Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sujith M. Gowda is active.

Publication


Featured research published by Sujith M. Gowda.


Learning Analytics and Knowledge | 2013

Affective states and state tests: investigating how affect throughout the school year predicts end of year learning outcomes

Zachary A. Pardos; Ryan S. Baker; Maria Ofelia Clarissa Z. San Pedro; Sujith M. Gowda; Supreeth M. Gowda

In this paper, we investigate the correspondence between student affect in a web-based tutoring platform throughout the school year and learning outcomes at the end of the year, on a high-stakes mathematics exam. The relationship between affect and learning outcomes has been studied previously, but not in a manner that is both longitudinal and fine-grained. Affect detectors are used to estimate student affective states based on post-hoc analysis of tutor log data. For every student action in the tutor, the detectors give us an estimated probability that the student is in a state of boredom, engaged concentration, confusion, or frustration, along with estimates of the probability that the student is exhibiting off-task or gaming behaviors. We ran the detectors on two years of log data from 8th grade students' use of the ASSISTments math tutoring system and collected the corresponding end-of-year, high-stakes state math test scores for the 1,393 students in our cohort. By correlating these data sources, we find that boredom during problem solving is negatively correlated with performance, as expected; however, boredom is positively correlated with performance when exhibited during scaffolded tutoring. A similar pattern is unexpectedly seen for confusion. Engaged concentration and frustration are both associated with positive learning outcomes, which is surprising in the case of frustration.
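
A minimal sketch of the analysis pattern described above: aggregate the per-action detector probabilities to one mean per student, then correlate those aggregates with end-of-year test scores. The file and column names are hypothetical placeholders, not artifacts from the paper.

```python
import pandas as pd
from scipy.stats import pearsonr

logs = pd.read_csv("assistments_logs.csv")       # hypothetical per-action log table
scores = pd.read_csv("state_test_scores.csv")    # hypothetical per-student scores

# Aggregate per-action affect probabilities to one mean per student.
affects = ["p_boredom", "p_engaged_concentration", "p_confusion", "p_frustration"]
per_student = logs.groupby("student_id")[affects].mean().reset_index()

merged = per_student.merge(scores, on="student_id")

# Correlate each aggregated affect estimate with the state test score.
for affect in affects:
    r, p = pearsonr(merged[affect], merged["state_test_score"])
    print(f"{affect}: r={r:+.3f}, p={p:.4f}")
```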


SIGKDD Explorations | 2012

The sum is greater than the parts: ensembling models of student knowledge in educational software

Zachary A. Pardos; Sujith M. Gowda; Ryan S. Baker; Neil T. Heffernan

Many competing models have been proposed in the past decade for predicting student knowledge within educational software. Recent research has attempted to combine these models in an effort to improve performance, but has yielded inconsistent results. While work on the 2010 KDD Cup data set showed the benefits of ensemble methods, work on the Genetics Tutor failed to show similar benefits. We hypothesize that the key factor is data set size. We explore the potential for improving student performance prediction with ensemble methods in a data set drawn from a different tutoring system, the ASSISTments Platform, which contains 15 times as many responses as the Genetics Tutor data set. We evaluated the predictive performance of eight student models and eight methods of ensembling predictions. Within this data set, ensemble approaches were more effective than any single method, with the best ensemble approach producing predictions of student performance 10% better than the best individual student knowledge model.
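
The abstract does not enumerate the eight ensembling methods, but two common baselines illustrate the idea: uniform averaging of model predictions and a trained linear combination (stacking). The data below are synthetic stand-ins for per-response correctness predictions from several student models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                        # actual correctness (0/1)

# Synthetic stand-ins for probability predictions from three student models.
preds = np.column_stack(
    [np.clip(y * 0.6 + rng.random(n) * 0.4, 0.0, 1.0) for _ in range(3)]
)

# Ensemble 1: uniform averaging of the model predictions.
avg = preds.mean(axis=1)

# Ensemble 2: a linear combination learned by regression (in practice this
# would be fit on held-out data; fit on the same data here for brevity).
stack = LinearRegression().fit(preds, y).predict(preds)

def rmse(p, y):
    return float(np.sqrt(np.mean((p - y) ** 2)))

print("average RMSE:", rmse(avg, y))
print("stacked RMSE:", rmse(stack, y))
```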


International Conference on User Modeling, Adaptation, and Personalization | 2010

Contextual slip and prediction of student performance after use of an intelligent tutor

Ryan S. Baker; Albert T. Corbett; Sujith M. Gowda; Angela Z. Wagner; Benjamin A. MacLaren; Linda R. Kauffman; Aaron P. Mitchell; Stephen Giguere

Intelligent tutoring systems that utilize Bayesian Knowledge Tracing have achieved the ability to accurately predict student performance not only within the intelligent tutoring system, but on paper post-tests outside of the system. Recent work has suggested that contextual estimation of student guessing and slipping leads to better prediction within the tutoring software (Baker, Corbett, & Aleven, 2008a, 2008b). However, it is not yet clear whether this new variant on knowledge tracing is effective at predicting the latent student knowledge that leads to successful post-test performance. In this paper, we compare the Contextual-Guess-and-Slip variant on Bayesian Knowledge Tracing to classical four-parameter Bayesian Knowledge Tracing and the Individual Difference Weights variant of Bayesian Knowledge Tracing (Corbett & Anderson, 1995), investigating how well each model variant predicts post-test performance. We also test other ways to utilize contextual estimation of slipping within the tutor in post-test prediction, and discuss hypotheses for why slipping during tutor use is a significant predictor of post-test performance, even after Bayesian Knowledge Tracing estimates are controlled for.
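
For reference, here is a sketch of the classical four-parameter Bayesian Knowledge Tracing update that the Contextual-Guess-and-Slip variant builds on. In the contextual variant, slip and guess become per-action estimates rather than the fixed constants used below; the parameter values here are purely illustrative.

```python
def bkt_update(p_know, correct, p_transit, p_slip, p_guess):
    """One BKT step: Bayesian posterior given the observation, then learning."""
    if correct:
        evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        posterior = p_know * (1 - p_slip) / evidence
    else:
        evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
        posterior = p_know * p_slip / evidence
    # Account for the chance the skill was learned at this opportunity.
    return posterior + (1 - posterior) * p_transit

# Trace estimated knowledge over an illustrative response sequence.
p_know = 0.3                                     # P(L0), initial knowledge
for correct in [0, 1, 1, 0, 1, 1]:
    p_know = bkt_update(p_know, correct, p_transit=0.1, p_slip=0.1, p_guess=0.2)
    print(f"P(Ln) = {p_know:.3f}")
```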


Artificial Intelligence in Education | 2011

Towards predicting future transfer of learning

Ryan S. Baker; Sujith M. Gowda; Albert T. Corbett

We present an automated detector that can predict a student's future performance on a transfer post-test, a post-test involving skills related to but different from the skills studied in the tutoring system, within an intelligent tutoring system for college genetics. We show that this detector predicts transfer better than Bayesian Knowledge Tracing, a measure of student learning in intelligent tutors that has been shown to predict performance on paper post-tests of the same skills studied in the intelligent tutor. We also find that this detector needs only a limited amount of student data (the first 20% of a student's data from a tutor lesson) in order to reach near-asymptotic predictive power.
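
A small sketch of the limited-data setup mentioned above: truncating each student's log to the first 20% of their actions before computing detector features. The file and column names are hypothetical placeholders.

```python
import pandas as pd

logs = pd.read_csv("tutor_logs.csv")             # hypothetical per-action log table
logs = logs.sort_values(["student_id", "timestamp"])

# Keep the first 20% of each student's actions (at least one per student).
logs["rank"] = logs.groupby("student_id").cumcount()
logs["n"] = logs.groupby("student_id")["student_id"].transform("size")
truncated = logs[logs["rank"] < (logs["n"] * 0.2).clip(lower=1)]
```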


International Conference on User Modeling, Adaptation, and Personalization | 2011

Ensembling predictions of student knowledge within intelligent tutoring systems

Ryan S. Baker; Zachary A. Pardos; Sujith M. Gowda; Bahador B. Nooraei; Neil T. Heffernan

Over recent decades, there has been a rich variety of approaches to modeling student knowledge and skill within interactive learning environments. There have recently been several empirical comparisons as to which types of student models are better at predicting future performance, both within and outside of the interactive learning environment. However, these comparisons have produced contradictory results. Within this paper, we examine whether ensemble methods, which integrate multiple models, can produce prediction results comparable to or better than the best of nine student modeling frameworks taken individually. We ensemble model predictions within a Cognitive Tutor for Genetics, at the level of predicting knowledge action-by-action within the tutor. We evaluate the predictions in terms of future performance within the tutor and on a paper post-test. Within this data set, we do not find evidence that ensembles of models are significantly better: they perform comparably to or slightly better than the best individual models at predicting future performance within the tutor software, but marginally significantly worse than the best individual models at predicting post-test performance.


Artificial Intelligence in Education | 2013

Towards an Understanding of Affect and Knowledge from Student Interaction with an Intelligent Tutoring System

Maria Ofelia Clarissa Z. San Pedro; Ryan S. Baker; Sujith M. Gowda; Neil T. Heffernan

Csikszentmihalyi’s Flow theory states that a balance between challenge and skill leads to high engagement, overwhelming challenge leads to anxiety or frustration, and insufficient challenge leads to boredom. In this paper, we test this theory within the context of student interaction with an intelligent tutoring system. Automated detectors of student affect and knowledge were developed, validated, and applied to a large data set. The results did not match Flow theory: boredom was more common for poorly-known material, and frustration was common both for very difficult material and very easy material. These results suggest that design for optimal engagement within online learning may require further study of the factors leading students to become bored on difficult material, and frustrated on very well-known material.
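
One way to sketch the challenge/skill analysis above is to bin each student action by the estimated probability that the student knows the skill and then inspect the average affect probabilities within each bin. The file and column names are hypothetical placeholders.

```python
import pandas as pd

df = pd.read_csv("actions_with_detectors.csv")   # hypothetical per-action table

# Bin actions by estimated knowledge, then average affect per bin.
df["know_bin"] = pd.cut(df["p_know"], bins=[0, 0.2, 0.4, 0.6, 0.8, 1.0])
print(df.groupby("know_bin", observed=True)[["p_boredom", "p_frustration"]].mean())
```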


Artificial Intelligence in Education | 2013

Towards Automatically Detecting Whether Student Learning is Shallow

Sujith M. Gowda; Ryan S. Baker; Albert T. Corbett; Lisa M. Rossi

Recent research has extended student modeling to infer not just whether a student knows a skill or set of skills, but also whether the student has achieved robust learning: learning that enables the student to transfer their knowledge and prepares them for future learning (PFL). However, a student may fail to achieve robust learning in two fashions: they may have no learning, or they may have shallow learning (learning that applies only to the current skill, and does not support transfer or PFL). Within this paper, we present automated detectors which identify shallow learners, who are likely to need different intervention than students who have not yet learned at all. These detectors are developed using K* machine-learned models, with data from college students learning introductory genetics from an intelligent tutoring system.
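
K*, as implemented in Weka (weka.classifiers.lazy.KStar), is an instance-based learner with an entropic distance measure and has no standard Python port, so the sketch below substitutes k-nearest neighbors as the closest common instance-based analogue. The features and labels are synthetic stand-ins, not the paper's tutor-log features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((300, 6))                         # stand-ins for tutor-log features
y = (X[:, 0] + X[:, 3] > 1).astype(int)          # 1 = shallow learner (illustrative)

# Instance-based shallow-learning detector, evaluated by cross-validation.
clf = KNeighborsClassifier(n_neighbors=5)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```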


Intelligent Tutoring Systems | 2012

Towards automatically detecting whether student learning is shallow

Ryan S. Baker; Sujith M. Gowda; Albert T. Corbett; Jaclyn Ocumpaugh

Recent research has extended student modeling to infer not just whether a student knows a skill or set of skills, but also whether the student has achieved robust learning: learning that leads the student to be able to transfer their knowledge and prepares them for future learning (PFL). However, a student may fail to achieve robust learning in two fashions: they may have no learning, or they may have shallow learning (learning that applies only to the current skill, and does not support transfer or PFL). Within this paper, we present an automated detector which is able to identify shallow learners, who are likely to need different intervention than students who have not yet learned at all. This detector is developed using a step regression approach, with data from college students learning introductory genetics from an intelligent tutoring system.
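
A sketch of forward stepwise (step) regression of the kind named above: greedily add the feature that most improves cross-validated fit, stopping when no remaining feature helps. The features here are synthetic stand-ins for tutor-log features.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_stepwise(X, y, max_features=5):
    """Greedy forward feature selection scored by cross-validated R^2."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_features:
        scores = []
        for j in remaining:
            cols = selected + [j]
            s = cross_val_score(LinearRegression(), X[:, cols], y, cv=5).mean()
            scores.append((s, j))
        s, j = max(scores)
        if s <= best_score:                      # no feature improves the fit
            break
        best_score = s
        selected.append(j)
        remaining.remove(j)
    return selected

# Illustrative synthetic data: y depends on features 2 and 5 only.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = X[:, 2] * 2 + X[:, 5] + rng.normal(0, 0.1, 200)
print("selected feature indices:", forward_stepwise(X, y))
```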


International Conference on User Modeling, Adaptation, and Personalization | 2014

Extending Log-Based Affect Detection to a Multi-User Virtual Environment for Science

Ryan S. Baker; Jaclyn Ocumpaugh; Sujith M. Gowda; Amy M. Kamarainen; Shari Metcalf

The application of educational data mining (EDM) techniques to interactive learning software is increasingly being used to broaden the range of constructs typically incorporated in student models, moving from traditional assessment of student knowledge to the assessment of engagement, affect, strategy, and metacognition. Researchers are also broadening the range of environments within which these constructs are assessed. In this study, we develop sensor-free affect detection for EcoMUVE, an immersive multi-user virtual environment that teaches middle-school students about causality in ecosystems. Models were constructed for five different educationally relevant affective states (boredom, confusion, delight, engaged concentration, and frustration). Such models allow us to examine the behaviors most closely associated with particular affective states, paving the way for the design of adaptive personalization to improve engagement and learning.


The Journal of the Learning Sciences | 2013

Predicting Robust Learning with the Visual Form of the Moment-by-Moment Learning Curve

Ryan S. Baker; Arnon Hershkovitz; Lisa M. Rossi; Adam B. Goldstein; Sujith M. Gowda

We present a new method for analyzing a student's learning over time for a specific skill: analysis of the graph of the student's moment-by-moment learning over time. Moment-by-moment learning is calculated using a data-mined model that assesses the probability that a student learned a skill or concept at a specific time during learning (Baker, Goldstein, & Heffernan, 2010, 2011). Two coders labeled data from students who used an intelligent tutoring system for college genetics, coding in terms of seven forms that the moment-by-moment learning curve can take. These labels were correlated with test data on the robustness of students' learning. We find that different visual forms are correlated with very different learning outcomes. This work suggests that analysis of moment-by-moment learning curves may be able to shed light on the implications of students' different patterns of learning over time.
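
A sketch of what inspecting the visual form of a moment-by-moment learning curve might look like: plotting per-step learning probabilities for one student and skill. The P(J) values below are illustrative stand-ins for the output of the data-mined model cited above.

```python
import matplotlib.pyplot as plt

# Illustrative per-step learning probabilities for one student and skill.
p_j = [0.02, 0.03, 0.05, 0.30, 0.45, 0.20, 0.08, 0.04, 0.03]

plt.plot(range(1, len(p_j) + 1), p_j, marker="o")
plt.xlabel("Practice opportunity")
plt.ylabel("P(J): probability of learning at this step")
plt.title("Moment-by-moment learning curve (illustrative)")
plt.show()
```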

Collaboration


Dive into Sujith M. Gowda's collaborations.

Top Co-Authors

Ryan S. Baker
University of Pennsylvania

Albert T. Corbett
Carnegie Mellon University

Angela Z. Wagner
Carnegie Mellon University

Neil T. Heffernan
Worcester Polytechnic Institute

Lisa M. Rossi
Worcester Polytechnic Institute

Aaron P. Mitchell
Carnegie Mellon University

Linda R. Kauffman
Carnegie Mellon University