Joseph B. Wiggins
North Carolina State University
Publications
Featured research published by Joseph B. Wiggins.
affective computing and intelligent interaction | 2013
Joseph F. Grafsgaard; Joseph B. Wiggins; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester
Affective and cognitive processes form a rich substrate on which learning plays out. Affective states often influence progress on learning tasks, resulting in positive or negative cycles of affect that impact learning outcomes. Developing a detailed account of the occurrence and timing of cognitive-affective states during learning can inform the design of affective tutorial interventions. In order to advance understanding of learning-centered affect, this paper reports on a study to analyze a video corpus of computer-mediated human tutoring using an automated facial expression recognition tool that detects fine-grained facial movements. The results reveal three significant relationships between facial expression, frustration, and learning: (1) Action Unit 2 (outer brow raise) was negatively correlated with learning gain, (2) Action Unit 4 (brow lowering) was positively correlated with frustration, and (3) Action Unit 14 (mouth dimpling) was positively correlated with both frustration and learning gain. Additionally, early prediction models demonstrated that facial actions during the first five minutes were significantly predictive of frustration and learning at the end of the tutoring session. The results represent a step toward a deeper understanding of learning-centered affective states, which will form the foundation for data-driven design of affective tutoring systems.
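As a rough illustration of the kind of analysis described, the sketch below correlates per-session mean facial action unit (AU) intensities with learning gain and frustration. It is a minimal sketch only: the column names and synthetic values are assumptions for illustration, not the study's corpus or pipeline.

```python
# A minimal sketch of this style of analysis, not the authors' pipeline:
# correlate per-session mean facial action unit (AU) intensities with
# learning gain and frustration. All column names and values are invented.
import pandas as pd
from scipy.stats import pearsonr

# One row per tutoring session.
sessions = pd.DataFrame({
    "au2_outer_brow_raise": [0.12, 0.45, 0.30, 0.05, 0.22, 0.38],
    "au4_brow_lower":       [0.40, 0.10, 0.35, 0.50, 0.28, 0.15],
    "au14_dimpler":         [0.25, 0.15, 0.33, 0.41, 0.09, 0.37],
    "learning_gain":        [0.30, 0.05, 0.20, 0.45, 0.15, 0.40],
    "frustration":          [3.0, 1.0, 2.5, 4.0, 2.0, 3.5],
})

aus = ["au2_outer_brow_raise", "au4_brow_lower", "au14_dimpler"]
for au in aus:
    for outcome in ["learning_gain", "frustration"]:
        r, p = pearsonr(sessions[au], sessions[outcome])
        print(f"{au:22s} vs {outcome:13s} r={r:+.2f} p={p:.3f}")
```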
international conference on multimodal interfaces | 2014
Joseph F. Grafsgaard; Joseph B. Wiggins; Alexandria Katarina Vail; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester
Detecting learning-centered affective states is difficult, yet crucial for adapting most effectively to users. Within tutoring in particular, the combined context of student task actions and tutorial dialogue shapes the student's affective experience. As we move toward detecting affect, we may also supplement the task and dialogue streams with rich sensor data. In a study of introductory computer programming tutoring, human tutors communicated with students through a text-based interface. Automated approaches were leveraged to annotate dialogue, task actions, facial movements, postural positions, and hand-to-face gestures. These dialogue, nonverbal behavior, and task action input streams were then used to predict retrospective student self-reports of engagement and frustration, as well as pretest/posttest learning gains. The results show that the combined set of multimodal features is most predictive, indicating an additive effect. Additionally, the findings demonstrate that the role of nonverbal behavior may depend on the dialogue and task context in which it occurs. This line of research identifies contextual and behavioral cues that may be leveraged in future adaptive multimodal systems.
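To illustrate the additive-effect comparison the abstract describes, the sketch below predicts an outcome from each modality alone and from all modalities combined, using cross-validated ridge regression on synthetic data. The feature groupings, dimensions, and data are assumptions, not the study's actual features or models.

```python
# A hedged sketch of the additive-effect comparison: cross-validated R^2 for
# each modality alone versus all modalities combined. Feature groupings,
# dimensions, and data are assumptions, not the study's features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60
dialogue  = rng.normal(size=(n, 4))  # e.g., dialogue act frequencies
task      = rng.normal(size=(n, 3))  # e.g., task action counts
nonverbal = rng.normal(size=(n, 5))  # e.g., facial/postural/gesture features
# Synthetic outcome that draws on all three streams.
engagement = (dialogue[:, 0] + task[:, 1] + nonverbal[:, 2]
              + rng.normal(scale=0.5, size=n))

feature_sets = {
    "dialogue": dialogue,
    "task": task,
    "nonverbal": nonverbal,
    "combined": np.hstack([dialogue, task, nonverbal]),
}
for name, X in feature_sets.items():
    r2 = cross_val_score(Ridge(), X, engagement, cv=5, scoring="r2").mean()
    print(f"{name:9s} mean CV R^2 = {r2:.2f}")
```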
artificial intelligence in education | 2013
Joseph F. Grafsgaard; Joseph B. Wiggins; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester
Recent years have seen a growing recognition of the central role of affect and motivation in learning. In particular, nonverbal behaviors such as posture and gesture provide key channels signaling affective and motivational states. Developing a clear understanding of these mechanisms will inform the development of personalized learning environments that promote successful affective and motivational outcomes. This paper investigates posture and gesture in computer-mediated tutorial dialogue using automated techniques to track posture and hand-to-face gestures. Annotated dialogue transcripts were analyzed to identify the relationships between student posture, student gesture, and tutor and student dialogue. The results indicate that posture and hand-to-face gestures are significantly associated with particular tutorial dialogue moves. Additionally, two-hands-to-face gestures occurred significantly more frequently among students with low self-efficacy. The results shed light on the cognitive-affective mechanisms that underlie these nonverbal behaviors. Collectively, the findings provide insight into the interdependencies among tutorial dialogue, posture, and gesture, revealing a new avenue for automated tracking of embodied affect during learning.
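One way to test an association between a nonverbal behavior and dialogue moves, in the spirit of this analysis, is a chi-square test of independence over a contingency table; a minimal sketch follows, with invented counts and assumed dialogue move categories rather than the paper's coding scheme.

```python
# A minimal sketch, with invented counts, of testing whether hand-to-face
# gestures co-occur with particular dialogue moves: a chi-square test of
# independence over a gesture-by-dialogue-move contingency table. The
# dialogue move categories are assumed labels, not the paper's coding scheme.
from scipy.stats import chi2_contingency

# Rows: gesture present / absent during the dialogue move.
# Columns: tutor question, tutor feedback, student statement (assumed).
table = [[30, 12, 8],
         [50, 45, 55]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```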
international conference on multimodal interfaces | 2014
Alexandria Katarina Vail; Joseph F. Grafsgaard; Joseph B. Wiggins; James C. Lester; Kristy Elizabeth Boyer
A variety of studies have established that users with different personality profiles exhibit different patterns of behavior when interacting with a system. Although patterns of behavior have been successfully used to predict cognitive and affective outcomes of an interaction, little work has been done to identify the variations in these patterns based on user personality profile. In this paper, we model sequences of facial expressions, postural shifts, hand-to-face gestures, system interaction events, and textual dialogue messages of a user interacting with a human tutor in a computer-mediated tutorial session. We use these models to predict the user's learning gain, frustration, and engagement at the end of the session. In particular, we examine the behavior of users based on their Extraversion trait score from a Big Five Factor personality survey. The analysis reveals a variety of personality-specific sequences of behavior that are significantly indicative of cognitive and affective outcomes. These results could impact user experience design of future interactive systems.
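An illustrative sketch of the sequence-by-personality idea follows: split users by an assumed Extraversion cutoff, extract bigrams of interaction events from each user's sequence, and compare which bigrams dominate in each group. This is not the authors' sequence models; all scores and event labels are invented.

```python
# An illustrative sketch, not the authors' sequence models: split users by an
# assumed Extraversion cutoff, extract bigrams of interaction events from
# each user's sequence, and compare which bigrams dominate in each group.
# All user scores and event labels are invented.
from collections import Counter

users = {
    # user_id: (Extraversion score, ordered interaction events)
    "u1": (4.2, ["smile", "msg", "posture_shift", "msg"]),
    "u2": (2.1, ["hand_to_face", "msg", "hand_to_face", "task_action"]),
    "u3": (4.8, ["msg", "smile", "msg", "task_action"]),
    "u4": (1.9, ["hand_to_face", "task_action", "posture_shift", "msg"]),
}

CUTOFF = 3.0  # assumed median split
bigrams = {"high": Counter(), "low": Counter()}
for score, events in users.values():
    group = "high" if score >= CUTOFF else "low"
    bigrams[group].update(zip(events, events[1:]))

for group, counts in bigrams.items():
    print(group, counts.most_common(3))
```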
artificial intelligence in education | 2017
Lydia Pezzullo; Joseph B. Wiggins; Megan Hardy Frankosky; Wookhee Min; Kristy Elizabeth Boyer; Bradford W. Mott; Eric N. Wiebe; James C. Lester
Virtual learning companions have shown significant potential for supporting students. However, there appear to be gender differences in their effectiveness. In order to support all students well, it is important to develop a deeper understanding of the role that student gender plays during interactions with learning companions. This paper reports on a study to explore the impact of student gender and learning companion design. In a three-condition study, we examine middle school students’ interactions in a game-based learning environment that featured one of the following: (1) a learning companion deeply integrated into the narrative of the game; (2) a learning companion whose backstory and personality were not integrated into the narrative but who provided equivalent task support; and (3) no learning companion. The results show that girls were significantly more engaged than boys, particularly with the narrative-integrated agent, while boys reported higher mental demand with that agent. Even when controlling for video game experience and prior knowledge, the gender effects held. These findings contribute to the growing understanding that learning companions must adapt to students’ gender in order to facilitate the most effective learning interactions.
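The covariate-controlled comparison described above can be sketched as an ordinary least squares model with a gender-by-condition interaction plus prior knowledge and game experience as covariates. The variable names, scales, and data below are all assumptions, not the study's measures.

```python
# A hedged sketch of the covariate-controlled comparison: an OLS model with a
# gender-by-condition interaction plus prior knowledge and game experience as
# covariates, in the spirit of the reported analysis. Variable names, scales,
# and data are all assumptions.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "engagement":      [4.1, 3.2, 4.5, 2.8, 3.9, 3.0,
                        4.4, 2.9, 3.8, 3.1, 4.0, 3.3],
    "gender":          ["F", "M"] * 6,
    "condition":       ["narrative", "narrative", "task_only", "task_only",
                        "none", "none"] * 2,
    "prior_knowledge": [0.5, 0.6, 0.4, 0.7, 0.5, 0.6,
                        0.3, 0.8, 0.5, 0.4, 0.6, 0.7],
    "game_experience": [2, 5, 3, 6, 1, 4, 2, 5, 3, 6, 2, 4],
})

model = smf.ols(
    "engagement ~ gender * condition + prior_knowledge + game_experience",
    data=data,
).fit()
print(model.params.round(2))
```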
technical symposium on computer science education | 2013
Joseph F. Grafsgaard; Joseph B. Wiggins; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester
Understanding how students solve computational problems is central to computer science education research. This goal is facilitated by recent advances in the availability and analysis of detailed multimodal data collected during student learning. Drawing on research into student problem-solving processes and findings on human posture and gesture, this poster utilizes a multimodal learning analytics framework that links automatically identified posture and gesture features with student problem-solving and dialogue events during one-on-one human tutoring of introductory computer science. The findings provide new insight into how bodily movements occur during computer science tutoring, and lay the foundation for programming feedback tools and deep analyses of student learning processes.
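A framework that links posture and gesture features with problem-solving and dialogue events needs, at minimum, a way to time-align the streams. The sketch below pairs timestamped nonverbal events with dialogue events falling within a small window; the event labels, timestamps, and window size are assumptions for illustration.

```python
# A hedged sketch of one alignment step such a framework needs: pairing
# timestamped posture/gesture events with dialogue events that fall within a
# small time window. Event labels, timestamps, and the window size are
# assumptions for illustration.
posture_events = [(12.0, "lean_forward"),
                  (45.5, "hand_to_face"),
                  (80.2, "lean_back")]
dialogue_events = [(11.2, "tutor_question"),
                   (44.9, "student_message"),
                   (120.0, "tutor_feedback")]

WINDOW = 2.0  # seconds; assumed co-occurrence threshold

for p_time, p_label in posture_events:
    matches = [d_label for d_time, d_label in dialogue_events
               if abs(d_time - p_time) <= WINDOW]
    print(f"{p_time:6.1f}s {p_label:13s} -> {matches}")
```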
artificial intelligence in education | 2017
Joseph B. Wiggins; Joseph F. Grafsgaard; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester
In recent years, significant advances have been made in intelligent tutoring systems, and these advances hold great promise for adaptively supporting computer science (CS) learning. In particular, tutorial dialogue systems that engage students in natural language dialogue can create rich, adaptive interactions. A promising approach to increasing the effectiveness of these systems is to adapt not only to problem-solving performance, but also to a student's characteristics. Self-efficacy refers to a student's view of her ability to complete learning objectives and to achieve goals; this characteristic may be particularly influential during tutorial dialogue for computer science education. This article examines a corpus of effective human tutoring for computer science to discover the extent to which considering self-efficacy as measured by a pre-survey, coupled with dialogue and task events during tutoring, improves models that predict the student's self-reported frustration and learning gains after tutoring. The analysis reveals that students with high and low self-efficacy benefit differently from tutorial dialogue. Student control, social dialogue, and tutor moves to increase efficiency may be particularly helpful for high self-efficacy students, while for low self-efficacy students, guided experimentation may foster greater learning while at the same time potentially increasing frustration. It is hoped that this line of research will enable tutoring systems for computer science to tailor their tutorial interactions more effectively.
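A minimal sketch of the modeling idea follows: fit separate frustration predictors for high- and low-self-efficacy students so that dialogue and task features can carry different weights in each group. The features, the cutoff, and the data are illustrative assumptions, not the study corpus.

```python
# A minimal sketch of the modeling idea: fit separate frustration predictors
# for high- and low-self-efficacy students, so dialogue/task features can
# carry different weights in each group. Features, the cutoff, and the data
# are illustrative assumptions, not the study corpus.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 40
# Columns (assumed): student control moves, social dialogue turns,
# guided experimentation episodes.
X = rng.normal(size=(n, 3))
self_efficacy = rng.uniform(1, 5, size=n)  # pre-survey score, assumed scale
frustration = 0.5 * X[:, 2] - 0.3 * X[:, 0] + rng.normal(scale=0.3, size=n)

for label, mask in [("high", self_efficacy >= 3), ("low", self_efficacy < 3)]:
    coefs = LinearRegression().fit(X[mask], frustration[mask]).coef_
    print(label, np.round(coefs, 2))
```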
educational data mining | 2013
Joseph F. Grafsgaard; Joseph B. Wiggins; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester
educational data mining | 2014
Joseph F. Grafsgaard; Joseph B. Wiggins; Kristy Elizabeth Boyer; Eric N. Wiebe; James C. Lester
technical symposium on computer science education | 2015
Joseph B. Wiggins; Kristy Elizabeth Boyer; Alok Baikadi; Aysu Ezen-Can; Joseph F. Grafsgaard; Eunyoung Ha; James C. Lester; Christopher Michael Mitchell; Eric N. Wiebe