Kai-min Chang
Carnegie Mellon University
Publication
Featured research published by Kai-min Chang.
intelligent tutoring systems | 2008
Joseph E. Beck; Kai-min Chang; Jack Mostow; Albert T. Corbett
Most ITS have a means of providing assistance to the student, either on student request or when the tutor determines it would be effective. Presumably, such assistance is included by the ITS designers since they feel it benefits the students. However, whether--and how--help helps students has not been a well-studied problem in the ITS community. In this paper we present three approaches for evaluating the efficacy of the Reading Tutor's help: creating experimental trials from data, learning decomposition, and Bayesian Evaluation and Assessment, an approach that uses dynamic Bayesian networks. We have found that experimental trials and learning decomposition both find a negative benefit for help---that is, help hurts! However, the Bayesian Evaluation and Assessment framework finds that help both promotes student long-term learning and provides additional scaffolding on the current problem. We discuss why these approaches give divergent results, and suggest that the Bayesian Evaluation and Assessment framework is the strongest of the three. In addition to introducing Bayesian Evaluation and Assessment, a method for simultaneously assessing students and evaluating tutorial interventions, this paper describes how help can both scaffold the current problem attempt and teach the student knowledge that will transfer to later problems.
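The scaffolding-versus-learning distinction can be made concrete with a toy knowledge-tracing-style update in which help enters the model twice. The parameter names and values below are illustrative assumptions, not the model or estimates reported in the paper.

```python
# Toy sketch (not the paper's model or parameters): a knowledge-tracing-style
# update in which help can raise both the chance of answering correctly now
# (scaffolding) and the chance of learning the skill (long-term benefit).

def step(p_know, correct, helped):
    """One time slice of a help-conditioned dynamic Bayesian network."""
    # Emission parameters: probability of a correct answer given the knowledge state.
    guess = 0.30 + (0.20 if helped else 0.0)   # help scaffolds the current attempt
    slip = 0.10
    p_correct = p_know * (1 - slip) + (1 - p_know) * guess

    # Infer the knowledge state from the observed response (Bayes' rule).
    if correct:
        post = p_know * (1 - slip) / p_correct
    else:
        post = p_know * slip / (1 - p_correct)

    # Transition parameters: probability of learning the skill on this step.
    learn = 0.15 + (0.10 if helped else 0.0)   # help also teaches
    return post + (1 - post) * learn

p = 0.4                                        # prior probability the skill is known
for correct, helped in [(0, 1), (1, 1), (1, 0)]:
    p = step(p, correct, helped)
    print(f"P(known) = {p:.3f}")
```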
intelligent tutoring systems | 2006
Kai-min Chang; Joseph E. Beck; Jack Mostow; Albert T. Corbett
This paper describes an effort to model a student's changing knowledge state during skill acquisition. Dynamic Bayes Nets (DBNs) provide a powerful way to represent and reason about uncertainty in time series data, and are therefore well-suited to model student knowledge. Many general-purpose Bayes net packages have been implemented and distributed; however, constructing DBNs often involves complicated coding effort. To address this problem, we introduce a tool called BNT-SM. BNT-SM inputs a data set and a compact XML specification of a Bayes net model hypothesized by a researcher to describe causal relationships among student knowledge and observed behavior. BNT-SM generates and executes the code to train and test the model using the Bayes Net Toolbox [1]. Compared to the BNT code it outputs, BNT-SM reduces the number of lines of code required to use a DBN by a factor of 5. In addition to supporting more flexible models, we illustrate how to use BNT-SM to simulate Knowledge Tracing (KT) [2], an established technique for student modeling. The trained DBN does a better job of modeling and predicting student performance than the original KT code (Area Under Curve = 0.610 > 0.568), due to differences in how it estimates parameters.
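As a reference point for what BNT-SM trains, here is a minimal sketch of standard Knowledge Tracing viewed as a two-node dynamic Bayesian network. The four parameter values are placeholders, not values fitted by BNT-SM or the original KT code.

```python
# Minimal Knowledge Tracing sketch: a hidden binary skill node emits observed
# correct/incorrect responses; the parameters below are illustrative placeholders.
P_L0, P_T, P_G, P_S = 0.25, 0.20, 0.20, 0.10   # init, learn, guess, slip

def knowledge_tracing(responses):
    """Return the predicted P(correct) before each observed response."""
    p_know, predictions = P_L0, []
    for correct in responses:
        # Predict the next response from the current belief about the skill.
        predictions.append(p_know * (1 - P_S) + (1 - p_know) * P_G)
        # Update the belief from the observation, then apply the learning transition.
        if correct:
            p_know = p_know * (1 - P_S) / predictions[-1]
        else:
            p_know = p_know * P_S / (1 - predictions[-1])
        p_know = p_know + (1 - p_know) * P_T
    return predictions

print(knowledge_tracing([0, 1, 1, 1]))
```

Predictions of this kind can be scored against held-out responses with a metric such as the area under the ROC curve, which is the comparison reported above.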
Psychological Review | 2014
Alan Jern; Kai-min Chang; Charles Kemp
Belief polarization occurs when 2 people with opposing prior beliefs both strengthen their beliefs after observing the same data. Many authors have cited belief polarization as evidence of irrational behavior. We show, however, that some instances of polarization are consistent with a normative account of belief revision. Our analysis uses Bayesian networks to characterize different kinds of relationships between hypotheses and data, and distinguishes between cases in which normative reasoners with opposing beliefs should both strengthen their beliefs, cases in which both should weaken their beliefs, and cases in which one should strengthen and the other should weaken his or her belief. We apply our analysis to several previous studies of belief polarization and present a new experiment that suggests that people tend to update their beliefs in the directions predicted by our normative account.
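A small numerical sketch shows how two normative Bayesian reasoners can polarize. The network structure, priors, and likelihoods here are invented for illustration and are not taken from the paper's experiments.

```python
# Toy belief-polarization example (illustrative numbers, not the paper's):
# hypothesis H and a latent variable V are independent a priori, and the datum
# D is likely exactly when H and V agree. Two observers share the likelihoods
# but hold different priors, and both strengthen their beliefs after seeing D=1.

def posterior_h1(prior_h1, prior_v1):
    """P(H=1 | D=1) under the shared likelihood table."""
    def p_d1(h, v):                  # P(D=1 | H, V): 0.9 if H == V else 0.1
        return 0.9 if h == v else 0.1
    joint_h1 = prior_h1 * (prior_v1 * p_d1(1, 1) + (1 - prior_v1) * p_d1(1, 0))
    joint_h0 = (1 - prior_h1) * (prior_v1 * p_d1(0, 1) + (1 - prior_v1) * p_d1(0, 0))
    return joint_h1 / (joint_h1 + joint_h0)

# Observer A leans toward H=1 and believes V=1; observer B leans the other way.
print(posterior_h1(0.6, 0.9))   # ~0.87: A's belief in H=1 rises from 0.6
print(posterior_h1(0.4, 0.1))   # ~0.13: B's belief in H=1 falls from 0.4, so H=0 rises
```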
conference on computer supported cooperative work | 2015
Xuanchong Li; Kai-min Chang; Yueran Yuan; Alexander G. Hauptmann
Massive Open Online Courses (MOOCs) enable everyone to receive high-quality education. However, current MOOC creators cannot provide an effective, economical, and scalable method to detect cheating on tests, which would be required for any certification. In this paper, we propose a Massive Open Online Proctoring (MOOP) framework, which combines both automatic and collaborative approaches to detect cheating behaviors in online tests. The MOOP framework consists of three major components: Automatic Cheating Detector (ACD), Peer Cheating Detector (PCD), and Final Review Committee (FRC). ACD uses webcam video or other sensors to monitor students and automatically flag suspected cheating behavior. Ambiguous cases are then sent to the PCD, where students peer-review flagged webcam video to confirm suspicious cheating behaviors. Finally, the list of suspicious cheating behaviors is sent to the FRC to make the final disciplinary decision. Our experiments show that ACD and PCD can detect usage of a cheat sheet with good accuracy and can reduce the overall human resources required to monitor MOOCs for cheating.
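The three-stage workflow can be pictured as a simple routing pipeline. The function name, thresholds, and voting rule below are hypothetical stand-ins for the actual ACD, PCD, and FRC implementations.

```python
# Hypothetical sketch of the MOOP routing logic (names and thresholds invented):
# the automatic detector scores each clip, ambiguous clips go to peer review,
# and only confirmed cases reach the final review committee.

def route_clip(acd_score, peer_votes=None, low=0.3, high=0.8):
    """Return the disposition of one webcam clip."""
    if acd_score < low:
        return "cleared automatically"
    if acd_score >= high:
        return "sent to final review committee"
    # Ambiguous region: ask peer reviewers to confirm or reject the flag.
    if peer_votes is not None and sum(peer_votes) > len(peer_votes) / 2:
        return "peer-confirmed, sent to final review committee"
    return "peer-rejected, cleared"

print(route_clip(0.15))
print(route_clip(0.55, peer_votes=[1, 1, 0]))
```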
Acta Psychologica | 2010
Charles Kemp; Kai-min Chang; Luigi Lombardi
This paper considers a family of inductive problems where reasoners must identify familiar categories or features on the basis of limited information. Problems of this kind are encountered, for example, when word learners acquire novel labels for pre-existing concepts. We develop a probabilistic model of identification and evaluate it in three experiments. Our first two experiments explore problems where a single category or feature must be identified, and our third experiment explores cases where participants must combine several pieces of information in order to simultaneously identify a category and a feature. Humans readily solve all of these problems, and we show that our model accounts for human inferences better than several alternative approaches.
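The identification problem can be illustrated with a one-line application of Bayes' rule over familiar categories. The candidate categories, the observed feature, and the probabilities below are made up for the sketch and are not the model evaluated in the experiments.

```python
# Illustrative identification sketch (invented categories and numbers): given a
# partial observation, score each familiar category by prior * likelihood and
# normalize, i.e. P(category | observation) is proportional to
# P(observation | category) * P(category).

priors = {"dog": 0.5, "cat": 0.3, "rabbit": 0.2}
likelihood_of_barks = {"dog": 0.90, "cat": 0.02, "rabbit": 0.01}  # P(obs | category)

unnormalized = {c: priors[c] * likelihood_of_barks[c] for c in priors}
total = sum(unnormalized.values())
posterior = {c: round(p / total, 3) for c, p in unnormalized.items()}
print(posterior)   # the observation "it barks" identifies the category "dog"
```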
learning analytics and knowledge | 2014
Yueran Yuan; Kai-min Chang; Jessica Nelson Taylor; Jack Mostow
Assessment of reading comprehension can be costly and obtrusive. In this paper, we use inexpensive EEG to detect the reading comprehension of readers in a school environment. We use EEG signals to produce above-chance predictors of student performance on end-of-sentence cloze questions. We also attempt (unsuccessfully) to distinguish among student mental states evoked by distracters that violate either syntactic, semantic, or contextual constraints. Overall, this work investigates the practicality of classroom use of inexpensive EEG devices as an unobtrusive measure of reading comprehension.
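A hedged sketch of the prediction setup: a standard classifier over per-sentence EEG features predicting whether a cloze question was answered correctly, scored by how far above chance it is. The feature layout and the use of scikit-learn's logistic regression are assumptions for illustration, not the paper's exact pipeline.

```python
# Illustrative sketch (not the paper's pipeline): predict cloze-question
# correctness from per-sentence EEG features and report cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))     # hypothetical features, e.g. band powers per sentence
y = rng.integers(0, 2, size=200)  # 1 = cloze question answered correctly

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5, scoring="roc_auc")
print(auc.mean())                 # ~0.5 on this random data; above chance on real EEG
```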
international conference on multimodal interfaces | 2012
Seshadri Sridharan; Yun-Nung Chen; Kai-min Chang; Alexander I. Rudnicky
Understanding user intent is a difficult problem in dialog systems, as they often need to make decisions under uncertainty. Using an inexpensive, consumer-grade EEG sensor and a Wizard-of-Oz dialog system, we show that it is possible to detect system misunderstanding even before the user reacts vocally. We also present the design and implementation details of NeuroDialog, a proof-of-concept dialog system that uses an EEG-based predictive model to detect system misrecognitions during live interaction.
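A minimal sketch of how a dialog manager might consult such a predictor before committing to an action. The function names, the threshold, and the confirmation policy are assumptions for illustration, not NeuroDialog's actual implementation.

```python
# Hypothetical decision loop (names and threshold invented): before acting on a
# recognized user utterance, consult an EEG-based estimate of the probability
# that the system misunderstood, and ask for confirmation when it is high.

def decide(hypothesis, p_misunderstanding, threshold=0.6):
    """Choose the system's next move for one dialog turn."""
    if p_misunderstanding >= threshold:
        return f"Did you mean: {hypothesis!r}?"   # repair before acting
    return f"OK, doing: {hypothesis!r}"           # commit to the interpretation

print(decide("book a flight to Boston", p_misunderstanding=0.2))
print(decide("book a flight to Austin", p_misunderstanding=0.8))
```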
international conference on user modeling, adaptation, and personalization | 2007
Joseph E. Beck; Kai-min Chang
artificial intelligence in education | 2011
Jack Mostow; Kai-min Chang; Jessica Nelson
aied workshops | 2013
Haohan Wang; Yiwei Li; Xiaobo Hu; Yucong Yang; Zhu Meng; Kai-min Chang