Publications


Featured research published by Joseph F. Grafsgaard.


Affective Computing and Intelligent Interaction | 2013

Automatically Recognizing Facial Indicators of Frustration: A Learning-centric Analysis

Joseph F. Grafsgaard; Joseph B. Wiggins; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester

Affective and cognitive processes form a rich substrate on which learning plays out. Affective states often influence progress on learning tasks, resulting in positive or negative cycles of affect that impact learning outcomes. Developing a detailed account of the occurrence and timing of cognitive-affective states during learning can inform the design of affective tutorial interventions. In order to advance understanding of learning-centered affect, this paper reports on a study to analyze a video corpus of computer-mediated human tutoring using an automated facial expression recognition tool that detects fine-grained facial movements. The results reveal three significant relationships between facial expression, frustration, and learning: (1) Action Unit 2 (outer brow raise) was negatively correlated with learning gain, (2) Action Unit 4 (brow lowering) was positively correlated with frustration, and (3) Action Unit 14 (mouth dimpling) was positively correlated with both frustration and learning gain. Additionally, early prediction models demonstrated that facial actions during the first five minutes were significantly predictive of frustration and learning at the end of the tutoring session. The results represent a step toward a deeper understanding of learning-centered affective states, which will form the foundation for data-driven design of affective tutoring systems.
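As a rough illustration of the kind of correlation analysis described above, the sketch below relates per-session facial action unit rates to frustration and learning gain. The column names and the data are hypothetical placeholders, not the study's corpus or pipeline.

```python
# Illustrative sketch: Pearson correlations between per-session AU activity
# and learning-centered outcomes. All data below are random placeholders.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
sessions = pd.DataFrame({
    "au2_rate": rng.random(40),        # outer brow raise
    "au4_rate": rng.random(40),        # brow lowering
    "au14_rate": rng.random(40),       # mouth dimpling
    "frustration": rng.random(40),     # retrospective self-report
    "learning_gain": rng.random(40),   # posttest minus pretest
})

for au in ["au2_rate", "au4_rate", "au14_rate"]:
    for outcome in ["frustration", "learning_gain"]:
        r, p = pearsonr(sessions[au], sessions[outcome])
        print(f"{au} vs {outcome}: r={r:+.2f}, p={p:.3f}")
```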


International Conference on Multimodal Interfaces | 2014

The Additive Value of Multimodal Features for Predicting Engagement, Frustration, and Learning during Tutoring

Joseph F. Grafsgaard; Joseph B. Wiggins; Alexandria Katarina Vail; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester

Detecting learning-centered affective states is difficult, yet crucial for adapting most effectively to users. Within tutoring in particular, the combined context of student task actions and tutorial dialogue shapes the student's affective experience. As we move toward detecting affect, we may also supplement the task and dialogue streams with rich sensor data. In a study of introductory computer programming tutoring, human tutors communicated with students through a text-based interface. Automated approaches were leveraged to annotate dialogue, task actions, facial movements, postural positions, and hand-to-face gestures. These dialogue, nonverbal behavior, and task action input streams were then used to predict retrospective student self-reports of engagement and frustration, as well as pretest/posttest learning gains. The results show that the combined set of multimodal features is most predictive, indicating an additive effect. Additionally, the findings demonstrate that the role of nonverbal behavior may depend on the dialogue and task context in which it occurs. This line of research identifies contextual and behavioral cues that may be leveraged in future adaptive multimodal systems.
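To make the idea of an additive multimodal effect concrete, the sketch below compares cross-validated predictions from each feature stream alone against the combined set. The feature matrices, dimensions, and Ridge model are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch: does combining modalities predict better than any single one?
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60                                   # hypothetical number of sessions
X_dialogue  = rng.normal(size=(n, 5))    # dialogue-act features
X_task      = rng.normal(size=(n, 4))    # task-action features
X_nonverbal = rng.normal(size=(n, 6))    # facial, postural, and gesture features
y = rng.normal(size=n)                   # e.g., frustration self-report

feature_sets = {
    "dialogue only":  X_dialogue,
    "task only":      X_task,
    "nonverbal only": X_nonverbal,
    "combined":       np.hstack([X_dialogue, X_task, X_nonverbal]),
}
for name, X in feature_sets.items():
    r2 = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()
    print(f"{name:15s} mean cross-validated R^2 = {r2:+.2f}")
```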


Affective Computing and Intelligent Interaction | 2011

Predicting facial indicators of confusion with hidden Markov models

Joseph F. Grafsgaard; Kristy Elizabeth Boyer; James C. Lester

Affect plays a vital role in learning. During tutoring, particular affective states may benefit or detract from student learning. A key cognitive-affective state is confusion, which has been positively associated with effective learning. Although identifying episodes of confusion presents significant challenges, recent investigations have identified correlations between confusion and specific facial movements. This paper builds on those findings to create a predictive model of learner confusion during task-oriented human-human tutorial dialogue. The model leverages textual dialogue, task, and facial expression history to predict upcoming confusion within a hidden Markov modeling framework. Analysis of the model structure also reveals meaningful modes of interaction within the tutoring sessions. The results demonstrate that because of its predictive power and rich qualitative representation, the model holds promise for informing the design of affective-sensitive tutoring systems.
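The sketch below shows, in miniature, how a hidden Markov model can turn a history of discretized dialogue, task, and facial-expression events into a one-step-ahead probability of confusion. The two states, three observation symbols, and all probabilities are invented for illustration; the paper's model is richer.

```python
# Illustrative sketch: forward filtering in a toy HMM, then predicting the next state.
import numpy as np

states = ["neutral", "confusion"]
A  = np.array([[0.9, 0.1],          # state transition probabilities
               [0.4, 0.6]])
B  = np.array([[0.5, 0.3, 0.2],     # emission probabilities over symbols:
               [0.2, 0.3, 0.5]])    # 0 = tutor explains, 1 = student edits code, 2 = AU4 onset
pi = np.array([0.8, 0.2])           # initial state distribution

def forward_filter(obs):
    """Return P(state_t | obs_1..t) for each step t (normalized forward pass)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    beliefs = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        beliefs.append(alpha)
    return np.array(beliefs)

obs_history = [0, 1, 2, 2, 1]                  # hypothetical event history
beliefs = forward_filter(obs_history)
next_state = beliefs[-1] @ A                   # one-step-ahead state prediction
print("P(next state is confusion) =", round(float(next_state[1]), 3))
```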


Artificial Intelligence in Education | 2013

Embodied Affect in Tutorial Dialogue: Student Gesture and Posture

Joseph F. Grafsgaard; Joseph B. Wiggins; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester

Recent years have seen a growing recognition of the central role of affect and motivation in learning. In particular, nonverbal behaviors such as posture and gesture provide key channels signaling affective and motivational states. Developing a clear understanding of these mechanisms will inform the development of personalized learning environments that promote successful affective and motivational outcomes. This paper investigates posture and gesture in computer-mediated tutorial dialogue using automated techniques to track posture and hand-to-face gestures. Annotated dialogue transcripts were analyzed to identify the relationships between student posture, student gesture, and tutor and student dialogue. The results indicate that posture and hand-to-face gestures are significantly associated with particular tutorial dialogue moves. Additionally, two-hands-to-face gestures occurred significantly more frequently among students with low self-efficacy. The results shed light on the cognitive-affective mechanisms that underlie these nonverbal behaviors. Collectively, the findings provide insight into the interdependencies among tutorial dialogue, posture, and gesture, revealing a new avenue for automated tracking of embodied affect during learning.
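The self-efficacy finding suggests a simple group comparison; the sketch below contrasts two-hands-to-face gesture counts between low and high self-efficacy groups with a nonparametric test. The counts are simulated placeholders, not the study's data.

```python
# Illustrative sketch: comparing gesture frequency across self-efficacy groups.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
low_se_counts  = rng.poisson(lam=6.0, size=20)   # gestures per session, low self-efficacy
high_se_counts = rng.poisson(lam=3.0, size=20)   # gestures per session, high self-efficacy

stat, p = mannwhitneyu(low_se_counts, high_se_counts, alternative="greater")
print(f"Mann-Whitney U = {stat}, p = {p:.3f}")
```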


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

Physiological Responses to Events during Training: Use of Skin Conductance to Inform Future Adaptive Learning Systems

Megan Hardy; Eric N. Wiebe; Joseph F. Grafsgaard; Kristy Elizabeth Boyer; James C. Lester

Understanding the role of physiological responses within the behavioral, cognitive, and affective domains of a training intervention is an important step towards designing augmented adaptive systems that respond to the learner's cognitive and affective states. Multiple studies have shown that specific affective states are related to learning (Craig, Graesser, Sullins, & Gholson, 2004; Graesser & D'Mello, 2011; Kort, Reilly, & Picard, 2001). This paper explores trainees' skin conductance responses to specific behavioral events and theorized cognitive and affective events, and their relationship to learning during a training session within the programming domain. A series of independent-samples t-tests revealed that students who exhibited a skin conductance response (SCR) to the behavioral event of beginning a compilation, as well as to the affective events of displays of uncertainty, negative feedback, and minimizations of failure, had significantly higher learning gains and post-test scores than students who did not exhibit an SCR to these events. These findings provide a step towards understanding the relationship between the physiological measure of skin conductance and the affective experiences of the learner in the course of events during a training session, and they inform the design of adaptive training and learning systems.
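The core analysis is an independent-samples comparison of learning outcomes between students who did and did not show an SCR to a given event; the sketch below mirrors that comparison on simulated numbers (using a Welch t-test as an assumption, not necessarily the study's exact procedure).

```python
# Illustrative sketch: t-test of learning gain by SCR presence for one event type.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
gain_with_scr    = rng.normal(loc=0.30, scale=0.12, size=18)  # normalized learning gains
gain_without_scr = rng.normal(loc=0.18, scale=0.12, size=22)

t, p = ttest_ind(gain_with_scr, gain_without_scr, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```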


Artificial Intelligence in Education | 2011

Modeling confusion: facial expression, task, and discourse in task-oriented tutorial dialogue

Joseph F. Grafsgaard; Kristy Elizabeth Boyer; Robert Phillips; James C. Lester

Recent years have seen a growing recognition of the importance of affect in learning. Efforts are being undertaken to enable intelligent tutoring systems to recognize and respond to learner emotion, but the field has not yet seen the emergence of a fully contextualized model of learner affect. This paper reports on a study of learner affect through an analysis of facial expression in human task-oriented tutorial dialogue. It extends prior work through in-depth analyses of a highly informative facial action unit and its interdependencies with dialogue utterances and task structure. The results demonstrate some ways in which learner facial expressions are dependent on both dialogue and task context. The findings also hold design implications for affect recognition and tutorial strategy selection within tutorial dialogue systems.
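One way to probe how a facial action unit depends on dialogue context is a contingency table of action unit occurrence by dialogue act; the sketch below runs a chi-square test of independence on made-up counts.

```python
# Illustrative sketch: does AU4 (brow lowering) occurrence depend on the tutor's dialogue act?
import numpy as np
from scipy.stats import chi2_contingency

#                  AU4 present  AU4 absent
counts = np.array([[34, 120],   # tutor gives negative feedback
                   [21, 210],   # tutor gives positive feedback
                   [48, 310]])  # tutor explains a concept

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```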


Intelligent Tutoring Systems | 2012

Toward a machine learning framework for understanding affective tutorial interaction

Joseph F. Grafsgaard; Kristy Elizabeth Boyer; James C. Lester

Affect and cognition intertwine throughout human experience. Research into this interplay during learning has identified relevant cognitive-affective states, but recognizing them poses significant challenges. Among multiple promising approaches for affect recognition, analyzing facial expression may be particularly informative. Descriptive computational models of facial expression and affect, such as those enabled by machine learning, aid our understanding of tutorial interactions. Hidden Markov modeling, in particular, is useful for encoding patterns in sequential data. This paper presents a descriptive hidden Markov model built upon facial expression data and tutorial dialogue within a task-oriented human-human tutoring corpus. The model reveals five frequently occurring patterns of affective tutorial interaction across text-based tutorial dialogue sessions. The results show that hidden Markov modeling holds potential for the semi-automated understanding of affective interaction, which may contribute to the development of affect-informed intelligent tutoring systems.
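Where the earlier HMM sketch filtered forward to predict confusion, a descriptive use of the same machinery decodes the most likely sequence of hidden interaction modes over a whole session. The toy two-state model below is an assumption; the paper describes five recurring patterns.

```python
# Illustrative sketch: Viterbi decoding of hidden "interaction modes" (toy parameters).
import numpy as np

A  = np.array([[0.85, 0.15],
               [0.30, 0.70]])                 # toy 2-state transition matrix
B  = np.array([[0.6, 0.3, 0.1],
               [0.1, 0.3, 0.6]])              # emissions over 3 observation symbols
pi = np.array([0.7, 0.3])

def viterbi(obs):
    """Most likely hidden-state path for an observation sequence (log-space)."""
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = logpi + logB[:, obs[0]]
    backpointers = []
    for o in obs[1:]:
        scores = delta[:, None] + logA        # scores[i, j]: best path ending with i -> j
        backpointers.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + logB[:, o]
    path = [int(delta.argmax())]
    for ptr in reversed(backpointers):
        path.append(int(ptr[path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1, 2, 2, 1]))            # prints one decoded state index per time step
```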


International Conference on Multimodal Interfaces | 2014

Predicting Learning and Engagement in Tutorial Dialogue: A Personality-Based Model

Alexandria Katarina Vail; Joseph F. Grafsgaard; Joseph B. Wiggins; James C. Lester; Kristy Elizabeth Boyer

A variety of studies have established that users with different personality profiles exhibit different patterns of behavior when interacting with a system. Although patterns of behavior have been successfully used to predict cognitive and affective outcomes of an interaction, little work has been done to identify the variations in these patterns based on user personality profile. In this paper, we model sequences of facial expressions, postural shifts, hand-to-face gestures, system interaction events, and textual dialogue messages of a user interacting with a human tutor in a computer-mediated tutorial session. We use these models to predict the user's learning gain, frustration, and engagement at the end of the session. In particular, we examine the behavior of users based on their Extraversion trait score of a Big Five Factor personality survey. The analysis reveals a variety of personality-specific sequences of behavior that are significantly indicative of cognitive and affective outcomes. These results could impact user experience design of future interactive systems.
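A minimal version of the personality split is sketched below: users are divided by Extraversion score and the most frequent two-event behavior sequences are counted in each group. The event vocabulary, scores, and median-split logic are hypothetical.

```python
# Illustrative sketch: frequent behavior bigrams by Extraversion group.
from collections import Counter

# Hypothetical per-user data: (Extraversion score, ordered behavior events).
users = [
    (4.2, ["msg_sent", "au4", "posture_shift", "msg_sent"]),
    (2.1, ["au14", "hand_to_face", "msg_sent", "au14"]),
    (3.8, ["msg_sent", "msg_sent", "posture_shift", "au4"]),
    (1.9, ["hand_to_face", "au14", "hand_to_face", "msg_sent"]),
]

cutoff = sorted(score for score, _ in users)[len(users) // 2]   # rough median split
bigrams = {"high extraversion": Counter(), "low extraversion": Counter()}
for score, events in users:
    group = "high extraversion" if score >= cutoff else "low extraversion"
    bigrams[group].update(zip(events, events[1:]))

for group, counts in bigrams.items():
    print(group, counts.most_common(3))
```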


Artificial Intelligence in Education | 2015

Modeling Self-Efficacy Across Age Groups with Automatically Tracked Facial Expression

Joseph F. Grafsgaard; Seung Y. Lee; Bradford W. Mott; Kristy Elizabeth Boyer; James C. Lester

Affect plays a central role in learning. Students' facial expressions are key indicators of affective states, and recent work has increasingly used automated facial expression tracking technologies as a method of affect detection. However, facial expressions have not been compared across age groups. The present study collected facial expressions of college and middle school students in the Crystal Island game-based learning environment. Facial expressions were tracked using the Computer Expression Recognition Toolbox, and models of self-efficacy for each age group highlighted differences in facial expressions. Age-specific findings such as these will inform the development of enriched affect models for broadening populations of learners using affect-sensitive learning environments.
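One way to surface age-group differences is to fit a separate model of self-efficacy from facial action unit features per group and compare which features carry weight; the sketch below does this with a plain linear regression on simulated data (an assumption for illustration, not the study's modeling choice).

```python
# Illustrative sketch: per-age-group linear models of self-efficacy from AU features.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "group": ["college"] * 30 + ["middle_school"] * 30,
    "au1": rng.normal(size=60),
    "au2": rng.normal(size=60),
    "au4": rng.normal(size=60),
    "self_efficacy": rng.normal(size=60),
})

for group, sub in df.groupby("group"):
    model = LinearRegression().fit(sub[["au1", "au2", "au4"]], sub["self_efficacy"])
    print(group, dict(zip(["au1", "au2", "au4"], model.coef_.round(2))))
```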


International Conference on User Modeling, Adaptation and Personalization | 2016

Gender Differences in Facial Expressions of Affect During Learning

Alexandria Katarina Vail; Joseph F. Grafsgaard; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester

Affective support is crucial during learning, with recent evidence suggesting it is particularly important for female students. Facial expression is a rich channel for affect detection, but a key open question is how facial displays of affect differ by gender during learning. This paper presents an analysis suggesting that facial expressions for women and men differ systematically during learning. Using facial video automatically tagged with facial action units, we find that despite no differences between genders in incoming knowledge, self-efficacy, or personality profile, women displayed one lower facial action unit significantly more than men, while men displayed brow lowering and lip fidgeting more than women. However, numerous facial actions including brow raising and nose wrinkling were strongly correlated with learning in women, whereas only one facial action unit, eyelid raiser, was associated with learning for men. These results suggest that the entire affect adaptation pipeline, from detection to response, may benefit from gender-specific models in order to support students more effectively.
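The per-gender correlation structure described above can be examined with a grouped analysis like the sketch below, which correlates each action unit with learning gain separately for each gender. Columns and values are hypothetical placeholders.

```python
# Illustrative sketch: AU-learning correlations computed separately by gender.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "gender": ["f"] * 25 + ["m"] * 25,
    "au1": rng.normal(size=50),            # brow raising
    "au4": rng.normal(size=50),            # brow lowering
    "au9": rng.normal(size=50),            # nose wrinkling
    "learning_gain": rng.normal(size=50),
})

for gender, sub in df.groupby("gender"):
    for au in ["au1", "au4", "au9"]:
        r, p = pearsonr(sub[au], sub["learning_gain"])
        print(f"{gender}  {au}: r={r:+.2f}, p={p:.3f}")
```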

Collaboration


Dive into Joseph F. Grafsgaard's collaboration.

Top Co-Authors

James C. Lester (North Carolina State University)
Eric N. Wiebe (North Carolina State University)
Joseph B. Wiggins (North Carolina State University)
Alexandria Katarina Vail (North Carolina State University)
Michelle Taub (North Carolina State University)
Nicholas V. Mudrick (North Carolina State University)
Roger Azevedo (North Carolina State University)
Aysu Ezen-Can (North Carolina State University)