Nigel Bosch
University of Notre Dame
Publications
Featured research published by Nigel Bosch.
International Conference on Software Engineering | 2014
Paige Rodeghero; Collin McMillan; Paul W. McBurney; Nigel Bosch; Sidney K. D'Mello
Source Code Summarization is an emerging technology for automatically generating brief descriptions of code. Current summarization techniques work by selecting a subset of the statements and keywords from the code, and then including information from those statements and keywords in the summary. The quality of the summary depends heavily on the process of selecting the subset: a high-quality selection would contain the same statements and keywords that a programmer would choose. Unfortunately, little evidence exists about the statements and keywords that programmers view as important when they summarize source code. In this paper, we present an eye-tracking study of 10 professional Java programmers in which the programmers read Java methods and wrote English summaries of those methods. We apply the findings to build a novel summarization tool. Then, we evaluate this tool and provide evidence to support the development of source code summarization systems.
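The selection step the abstract describes (choosing a subset of statements and keywords to seed a summary) can be illustrated with a deliberately naive sketch. This is not the authors' tool; it simply ranks identifiers in a Java method by frequency, which stands in for the learned importance the paper investigates.

```python
import re

# Java reserved words to skip when collecting candidate keywords
JAVA_KEYWORDS = {"public", "private", "static", "void", "int", "return",
                 "new", "if", "else", "for", "while", "class", "final"}

def summary_keywords(java_method, k=3):
    """Naive keyword selection: rank identifiers in a method by frequency,
    skipping Java reserved words (a stand-in for learned importance)."""
    tokens = re.findall(r"[A-Za-z_]\w*", java_method)
    counts = {}
    for t in tokens:
        if t.lower() not in JAVA_KEYWORDS:
            counts[t] = counts.get(t, 0) + 1
    # Most frequent first; ties broken alphabetically for determinism
    return sorted(counts, key=lambda t: (-counts[t], t))[:k]
```

A real summarizer would weight terms by position and role (the eye-tracking findings suggest, for instance, that method signatures attract disproportionate attention), but the select-then-describe pipeline has this overall shape.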
IEEE Transactions on Affective Computing | 2017
Hamed Monkaresi; Nigel Bosch; Rafael A. Calvo; Sidney K. D'Mello
We explored how computer vision techniques can be used to detect engagement while students (N = 22) completed a structured writing activity (draft-feedback-review) similar to activities encountered in educational settings. Students provided engagement annotations both concurrently during the writing activity and retrospectively from videos of their faces after the activity. We used computer vision techniques to extract three sets of features from videos: heart rate, Animation Units (from Microsoft Kinect Face Tracker), and local binary patterns in three orthogonal planes (LBP-TOP). These features were used in supervised learning for detection of concurrent and retrospective self-reported engagement. Area under the ROC Curve (AUC) was used to evaluate classifier accuracy using leave-several-students-out cross-validation. We achieved an AUC = .758 for concurrent annotations and AUC = .733 for retrospective annotations. The Kinect Face Tracker features produced the best results among the individual channels, but the overall best results were found using a fusion of channels.
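The evaluation protocol here has two ingredients: AUC as the accuracy metric, and cross-validation folds that keep each student's data entirely in one fold. A minimal stdlib-only sketch (my illustration, not the authors' code) of both:

```python
def auc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive example is scored above a randomly chosen negative one
    (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def leave_students_out(records, held_out):
    """Split (student_id, features, label) records so that no student
    appears in both the training and test folds."""
    train = [r for r in records if r[0] not in held_out]
    test = [r for r in records if r[0] in held_out]
    return train, test
```

Holding out whole students, rather than random instances, is what lets the reported AUCs (.758 concurrent, .733 retrospective) be read as generalization to unseen students.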
Artificial Intelligence in Education | 2013
Nigel Bosch; Sidney D’Mello; Caitlin Mills
We conducted a study to track the emotions of novice programmers, their behavioral correlates, and their relationship with performance as the programmers learned the basics of computer programming in the Python language. Twenty-nine participants without prior programming experience completed the study, which consisted of a 25-minute scaffolding phase (with explanations and hints) and a 15-minute fadeout phase (no explanations or hints) with a computerized learning environment. Emotional states were tracked via retrospective self-reports in which learners viewed videos of their faces and computer screens recorded during the learning session and made judgments about their emotions at approximately 100 points. The results indicated that flow/engaged (23%), confusion (22%), frustration (14%), and boredom (12%) were the major emotions students experienced, while curiosity, happiness, anxiety, surprise, anger, disgust, fear, and sadness were comparatively rare. The emotions varied as a function of instructional scaffolds and were systematically linked to different student behaviors (idling, constructing code, running code). Boredom, flow/engaged, and confusion were also correlated with performance outcomes. Implications of our findings for affect-sensitive learning interventions are discussed.
Intelligent Tutoring Systems | 2014
Caitlin Mills; Nigel Bosch; Arthur C. Graesser; Sidney K. D'Mello
This research predicted behavioral disengagement using quitting behaviors while learning from instructional texts. Supervised machine learning algorithms were used to predict if students would quit an upcoming text by analyzing reading behaviors observed in previous texts. Behavioral disengagement (quitting at any point during the text) was predicted with an accuracy of 76.5% (48% above chance), before students even began engaging with the text. We also predicted if a student would quit reading on the first page of a text or continue reading past the first page with an accuracy of 88.5% (29% above chance), as well as if students would quit sometime after the first page with an accuracy of 81.4% (51% greater than chance). Both actual quits and predicted quits were significantly related to learning, which provides some evidence for the predictive validity of our model. Implications and future work related to ITSs are also discussed.
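The abstract reports accuracies both as raw percentages and as percentages "above chance". One common reading of the latter (an assumption on my part, not stated in the abstract) is relative improvement over a chance baseline such as the majority-class rate:

```python
def improvement_over_chance(accuracy, chance):
    """Relative improvement of a classifier's accuracy over a chance
    baseline (e.g. the majority-class rate). Assumed interpretation of
    'X% above chance'; other conventions (e.g. kappa) also exist."""
    return (accuracy - chance) / chance

# e.g. 90% accuracy against a 50% baseline is an 80% relative improvement
```

Reporting both numbers matters because a high raw accuracy can be unimpressive when one class (here, not quitting) dominates.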
KSII Transactions on Internet and Information Systems | 2016
Nigel Bosch; Sidney K. D'Mello; Jaclyn Ocumpaugh; Ryan S. Baker; Valerie J. Shute
Affect detection is a key component in intelligent educational interfaces that respond to students’ affective states. We use computer vision and machine-learning techniques to detect students’ affect from facial expressions (primary channel) and gross body movements (secondary channel) during interactions with an educational physics game. We collected data in the real-world environment of a school computer lab with up to 30 students simultaneously playing the game while moving around, gesturing, and talking to each other. The results were cross-validated at the student level to ensure generalization to new students. Classification accuracies, quantified as area under the receiver operating characteristic curve (AUC), were above chance (AUC of 0.5) for all the affective states observed, namely, boredom (AUC = .610), confusion (AUC = .649), delight (AUC = .867), engagement (AUC = .679), frustration (AUC = .631), and for off-task behavior (AUC = .816). Furthermore, the detectors showed temporal generalizability in that there was less than a 2% decrease in accuracy when tested on data collected from different times of the day and from different days. There was also some evidence of generalizability across ethnicity (as perceived by human coders) and gender, although with a higher degree of variability attributable to differences in affect base rates across subpopulations. In summary, our results demonstrate the feasibility of generalizable video-based detectors of naturalistic affect in a real-world setting, suggesting that the time is ripe for affect-sensitive interventions in educational games and other intelligent interfaces.
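The abstract's closing point about affect base rates is why AUC, not raw accuracy, is used throughout: with skewed base rates a skill-free detector can look accurate. A toy illustration (mine, not from the paper):

```python
from collections import Counter

def majority_accuracy(labels):
    """Accuracy of always predicting the most frequent class: looks high
    when base rates are skewed, even with no real detection skill."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# A detector that never fires on a rare state (e.g. a 90/10 split)
labels = [0] * 90 + [1] * 10
print(majority_accuracy(labels))  # 0.9, yet its AUC would be chance (0.5)
```

AUC is invariant to the base rate, which is what makes comparisons across states like delight (rare) and engagement (common) meaningful.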
Intelligent Tutoring Systems | 2014
Nigel Bosch; Yuxuan Chen; Sidney D’Mello
We built detectors capable of automatically recognizing affective states of novice computer programmers from student-annotated videos of their faces recorded during an introductory programming tutoring session. We used the Computer Expression Recognition Toolbox (CERT) to track facial features based on the Facial Action Coding System, and machine learning techniques to build classification models. Confusion/Uncertainty and Frustration were distinguished from all other affective states in a student-independent fashion at levels above chance (Cohen’s kappa = .22 and .23, respectively), but detection accuracies for Boredom, Flow/Engagement, and Neutral were lower (kappas = .04, .11, and .07). We discuss the differences between detection of spontaneous versus fixed (polled) judgments as well as the features used in the models.
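Cohen's kappa, the metric reported here, measures agreement between detector output and ground truth after subtracting the agreement expected from the two label distributions alone. A stdlib sketch (my illustration):

```python
from collections import Counter

def cohens_kappa(truth, pred):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from the two marginal label distributions."""
    n = len(truth)
    p_observed = sum(t == p for t, p in zip(truth, pred)) / n
    ct, cp = Counter(truth), Counter(pred)
    p_expected = sum(ct[k] * cp.get(k, 0) for k in ct) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)
```

Kappa is 0 for a chance-level detector and 1 for perfect agreement, so the reported .22/.23 for Confusion and Frustration indicate modest but real signal, while .04 for Boredom is near chance.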
Computers & Education | 2015
Valerie J. Shute; Sidney K. D'Mello; Ryan S. Baker; Kyunghwa Cho; Nigel Bosch; Jaclyn Ocumpaugh; Matthew Ventura; Victoria Almeda
This study investigated the relationships among incoming knowledge, persistence, affective states, in-game progress, and, consequently, learning outcomes for students using the game Physics Playground. We used structural equation modeling to examine these relations. We tested three models, obtaining a model with good fit to the data. We found evidence that both the pretest and the in-game measure of student performance significantly predicted learning outcome, while the in-game measure of performance was predicted by pretest data, frustration, and engaged concentration. Moreover, we found evidence for two indirect paths from engaged concentration and frustration to learning, via the in-game progress measure. We discuss the importance of these findings and consider viable next steps concerning the design of effective learning supports within game environments. Highlights: We model relations among various student variables and learning outcome in a game. Pretest and in-game performance significantly predict learning outcome. In-game performance is predicted by pretest data, frustration, and engagement. Two indirect paths involving frustration and engagement predict learning.
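The "indirect paths" in the structural equation model follow standard path-analysis arithmetic: the indirect effect of a predictor on learning through a mediator (here, in-game progress) is the product of the two path coefficients. A minimal sketch with hypothetical coefficients, not the paper's fitted values:

```python
def indirect_effect(path_a, path_b):
    """Indirect effect of X on Y through mediator M in a path model:
    the product of the X->M and M->Y path coefficients."""
    return path_a * path_b

def total_effect(direct, *indirect_effects):
    """Total effect: the direct path plus all indirect paths."""
    return direct + sum(indirect_effects)
```

For example, if engaged concentration predicted in-game progress with a coefficient of 0.5 and progress predicted learning with 0.4 (made-up numbers), the indirect effect would be 0.2.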
Artificial Intelligence in Education | 2013
Caitlin Mills; Sidney D’Mello; Blair Lehman; Nigel Bosch; Amber Chauncey Strain; Arthur C. Graesser
Maintaining learner engagement is critical for all types of learning technologies. This study investigated how choice over a learning topic and the difficulty of the materials influenced mind wandering, engagement, and learning during a computerized learning task. Fifty-nine participants were randomly assigned to a text difficulty and choice condition (i.e., self-selected or experimenter-selected topic) and measures of mind wandering and engagement were collected during learning. Participants who studied the difficult version of the texts reported significantly higher rates of mind wandering (d = .41) and lower arousal both during (d = .52) and after the learning session (d = .48). Mind wandering and arousal were not affected by choice. However, participants who were assigned to study the topic they selected reported significantly more positive valence during (d = .57) but not after learning. These participants also scored substantially higher on a subsequent knowledge test (d = 1.27). These results suggest that choice and text difficulty differentially impact mind wandering, engagement, and learning and provide important considerations for the design of ITSs and serious games with a reading component.
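The effect sizes reported throughout (d = .41, .52, 1.27, ...) are Cohen's d: the difference between two group means scaled by their pooled standard deviation. A stdlib sketch (my illustration):

```python
from math import sqrt

def cohens_d(group_x, group_y):
    """Cohen's d: standardized mean difference between two groups,
    using the pooled sample standard deviation."""
    nx, ny = len(group_x), len(group_y)
    mx, my = sum(group_x) / nx, sum(group_y) / ny
    vx = sum((v - mx) ** 2 for v in group_x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in group_y) / (ny - 1)
    pooled_sd = sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd
```

By the usual rough benchmarks, d around .4-.5 (the mind wandering and arousal effects) is a medium effect, while d = 1.27 for the knowledge test is a large one.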
Artificial Intelligence in Education | 2015
Caitlin Mills; Sidney D’Mello; Nigel Bosch; Andrew Olney
Mind wandering (zoning out) can be detrimental to learning outcomes in a host of educational activities, from reading to watching video lectures, yet it has received little attention in the field of intelligent tutoring systems (ITS). In the current study, participants self-reported mind wandering during a learning session with Guru, a dialogue-based ITS for biology. On average, participants interacted with Guru for 22 minutes and reported an average of 11.5 instances of mind wandering, or one instance every two minutes. The frequency of mind wandering was compared across five different phases of Guru (Common-Ground-Building Instruction, Intermittent Summary, Concept Map, Scaffolded Dialogue, and Cloze task), each requiring different learning strategies. The rate of mind wandering per minute was highest during the Common-Ground-Building Instruction and Scaffolded Dialogue phases of Guru. Importantly, there was a significant negative correlation between mind wandering and learning, highlighting the need to address this phenomenon during learning with ITSs.
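The key quantitative claim is a negative Pearson correlation between mind-wandering frequency and learning. A stdlib sketch of the coefficient (my illustration, with made-up data in the test, not the study's):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    covariance scaled by the product of the standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near -1 would mean learners who mind-wandered more learned systematically less; the study reports a significant negative (though not perfect) relationship.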
Artificial Intelligence in Education | 2015
Nigel Bosch; Sidney D’Mello; Ryan S. Baker; Jaclyn Ocumpaugh; Valerie J. Shute
The goal of this paper was to explore the possibility of generalizing face-based affect detectors across multiple days, a problem that plagues physiology-based affect detection. Videos of students playing an educational physics game were collected in a noisy computer-enabled classroom environment where students conversed with each other, moved around, and gestured. Trained observers provided real-time annotations of learning-centered affective states (e.g., boredom, confusion) as well as off-task behavior. Detectors were trained using data from one day and tested on data from different students on another day. These cross-day detectors demonstrated above-chance classification accuracy with an average Area Under the ROC Curve (AUC, where .500 is chance level) of .658, which was similar to the within-day (training and testing on data collected on the same day) AUC of .667. This work demonstrates the feasibility of generalizing face-based affect detectors across time in an ecologically valid computer-enabled classroom environment.
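The cross-day protocol described above can be sketched as a split generator: training data comes from one day, and test data comes from a different day and from students not seen in training. This is my stdlib illustration of the splitting logic, not the authors' pipeline:

```python
def cross_day_splits(records):
    """Yield (train, test) pairs from (student, day, features, label)
    records, where the test fold uses a different day AND excludes every
    student who appears in the training fold."""
    days = sorted({r[1] for r in records})
    for train_day in days:
        train = [r for r in records if r[1] == train_day]
        train_students = {r[0] for r in train}
        test = [r for r in records
                if r[1] != train_day and r[0] not in train_students]
        yield train, test
```

Comparing the average test AUC from splits like these (.658) against the within-day figure (.667) is what supports the paper's claim that the drop from temporal generalization is small.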