
Publication


Featured research published by Scotty D. Craig.


Journal of Educational Media | 2004

Affect and learning: an exploratory look into the role of affect in learning with AutoTutor

Scotty D. Craig; Arthur C. Graesser; Jeremiah Sullins; Barry Gholson

The role that affective states play in learning was investigated from the perspective of a constructivist learning framework. We observed six different affect states (frustration, boredom, flow, confusion, eureka and neutral) that potentially occur during the process of learning introductory computer literacy with AutoTutor, an intelligent tutoring system with tutorial dialogue in natural language. Observational analyses revealed significant relationships between learning and the affective states of boredom, flow and confusion. The positive correlation between confusion and learning is consistent with a model that assumes that cognitive disequilibrium is one precursor to deep learning. The findings that learning correlates negatively with boredom and positively with flow are consistent with predictions from Csikszentmihalyi's analysis of flow experiences.


Journal of Educational Psychology | 2002

Animated pedagogical agents in multimedia educational environments: Effects of agent properties, picture features, and redundancy

Scotty D. Craig; Barry Gholson; David M. Driscoll

Two experiments explored the integration of animated agents into multimedia environments in the context of R. E. Mayer's (2001) cognitive theory of multimedia learning. Experiment 1 was a 3 (agent properties: agent only, agent with gesture, no agent) × 3 (picture features: static picture, sudden onset, animation) design. Agent properties produced no significant effects. Both sudden onset and animation conditions facilitated performance relative to the static-picture condition. In Experiment 2, we explored the effects of printed text, spoken narration, and spoken narration with the printed text, in a multimedia environment that included an agent, to investigate effects of redundancy. The spoken-narration-only condition outperformed the other 2, with no differences between printed text and printed text with spoken narration.


User Modeling and User-adapted Interaction | 2008

Automatic detection of learner's affect from conversational cues

Sidney K. D'Mello; Scotty D. Craig; Amy Witherspoon; Bethany McDaniel; Arthur C. Graesser

We explored the reliability of detecting a learner’s affect from conversational features extracted from interactions with AutoTutor, an intelligent tutoring system (ITS) that helps students learn by holding a conversation in natural language. Training data were collected in a learning session with AutoTutor, after which the affective states of the learner were rated by the learner, a peer, and two trained judges. Inter-rater reliability scores indicated that the classifications of the trained judges were more reliable than those of the novice judges. Seven data sets that temporally integrated the affective judgments with the dialogue features of each learner were constructed. The first four data sets corresponded to the judgments of the learner, a peer, and two trained judges, while the remaining three data sets combined judgments of two or more raters. Multiple regression analyses confirmed the hypothesis that dialogue features could significantly predict the affective states of boredom, confusion, flow, and frustration. Machine learning experiments indicated that standard classifiers were moderately successful in discriminating the affective states of boredom, confusion, flow, frustration, and neutral, yielding a peak accuracy of 42% with neutral (chance = 20%) and 54% without neutral (chance = 25%). Individual detections of boredom, confusion, flow, and frustration, when contrasted with neutral affect, had maximum accuracies of 69%, 68%, 71%, and 78%, respectively (chance = 50%). The classifiers that operated on the emotion judgments of the trained judges and combined models outperformed those based on judgments of the novices (i.e., the self and peer). Follow-up classification analyses that assessed the degree to which machine-generated affect labels correlated with affect judgments provided by humans revealed that human-machine agreement was on par with novice judges (self and peer) but quantitatively lower than trained judges.
We discuss the prospects of extending AutoTutor into an affect-sensing ITS.
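The chance baselines quoted in the abstract follow directly from the number of candidate labels: a uniform guesser over k labels is correct 1/k of the time (20% for five labels, 25% for four, 50% for two). A minimal sketch (not the authors' code; only the reported numbers are plugged in) of how raw accuracies can be put on a common chance-corrected scale:

```python
# Chance baselines and chance-corrected scores for classifier accuracies.
# Assumes a uniform random baseline over the candidate affect labels.

def chance_baseline(num_labels: int) -> float:
    """Accuracy of uniform random guessing over num_labels classes."""
    return 1.0 / num_labels

def above_chance(accuracy: float, num_labels: int) -> float:
    """Accuracy rescaled so 0.0 = chance and 1.0 = perfect."""
    chance = chance_baseline(num_labels)
    return (accuracy - chance) / (1.0 - chance)

# Five-way discrimination (boredom, confusion, flow, frustration, neutral):
print(chance_baseline(5))               # 0.2, the 20% baseline above
print(round(above_chance(0.42, 5), 3))  # 0.275: the 42% peak, chance-corrected
```

Rescaling this way makes the 42%/five-label and 54%/four-label results comparable, since each is measured against its own chance floor.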


Cognition and Instruction | 2006

The Deep-Level-Reasoning-Question Effect: The Role of Dialogue and Deep-Level-Reasoning Questions During Vicarious Learning

Scotty D. Craig; Jeremiah Sullins; Amy Witherspoon; Barry Gholson

We investigated the impact of dialogue and deep-level-reasoning questions on vicarious learning in 2 studies with undergraduates. In Experiment 1, participants learned material by interacting with AutoTutor or by viewing 1 of 4 vicarious learning conditions: a noninteractive recorded version of the AutoTutor dialogues, a dialogue with a deep-level-reasoning question preceding each sentence, a dialogue with a deep-level-reasoning question preceding half of the sentences, or a monologue. Learners in the condition where a deep-level-reasoning question preceded each sentence significantly outperformed those in the other 4 conditions. Experiment 2 included the same interactive and noninteractive recorded condition, along with 2 vicarious learning conditions involving deep-level-reasoning questions. Both deep-level-reasoning-question conditions significantly outperformed the other conditions. These findings provide evidence that deep-level-reasoning questions improve vicarious learning.


Cognition & Emotion | 2008

Emote aloud during learning with AutoTutor: Applying the Facial Action Coding System to cognitive–affective states during learning

Scotty D. Craig; Sidney K. D'Mello; Amy Witherspoon; Arthur C. Graesser

In an attempt to discover the facial action units for affective states that occur during complex learning, this study adopted an emote-aloud procedure in which participants were recorded as they verbalised their affective states while interacting with an intelligent tutoring system (AutoTutor). Participants’ facial expressions were coded by two expert raters using Ekman's Facial Action Coding System and analysed using association rule mining techniques. The two expert raters achieved an overall kappa ranging between .76 and .84. The association rule mining analysis uncovered facial actions associated with confusion, frustration, and boredom. We discuss these rules and the prospects of enhancing AutoTutor with non-intrusive affect-sensitive capabilities.
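The kappa values of .76–.84 reported above measure inter-rater agreement corrected for chance. A minimal sketch of Cohen's kappa for two raters (not the authors' code; the affect labels below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters,
    corrected for the agreement expected by chance given each
    rater's label frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label]
                   for label in freq_a.keys() | freq_b.keys()) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical affect labels assigned by two FACS-trained raters.
rater_1 = ["confusion", "boredom", "confusion", "frustration"]
rater_2 = ["confusion", "boredom", "frustration", "frustration"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.64
```

Values of .76–.84 are conventionally read as substantial-to-excellent agreement, which is why the trained judges' codings were treated as the more reliable ground truth.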


The International Journal of Learning | 2009

Multimethod assessment of affective experience and expression during deep learning

Sidney K. D'Mello; Scotty D. Craig; Arthur C. Graesser

Inquiries into the link between affect and learning require robust methodologies to measure the learner's affective states. We describe two studies that utilised either an online or offline methodology to detect the affective states of a learner during a tutorial session with AutoTutor. The online study relied on self-reports for affect judgements, while the offline study considered the judgements by the learner, a peer and two trained judges. The studies also investigated the relationships between facial features, conversational cues and emotional expressions in an attempt to scaffold the development of computer algorithms to automatically detect learners' emotions. Both methodologies showed that boredom, confusion and frustration are the prominent affective states during learning with AutoTutor. For both methodologies, there were also some relationships involving patterns of facial activity and conversational cues that were diagnostic of emotional expressions.


Journal of Educational Computing Research | 2003

Vicarious Learning: Effects of Overhearing Dialog and Monologue-Like Discourse in a Virtual Tutoring Session.

David M. Driscoll; Scotty D. Craig; Barry Gholson; Matthew Ventura; Xiangen Hu; Arthur C. Graesser

In two experiments, students overheard two computer-controlled virtual agents discussing four computer literacy topics in dialog discourse and four in monologue discourse. In Experiment 1, the virtual tutee asked a series of deep questions in the dialog condition; in the monologue condition of both studies, only one question per topic was asked. In the dialog conditions of Experiment 2, the virtual tutee asked either deep questions, shallow questions, or made comments. In a fourth “dialog” condition, the comments were spoken by the virtual tutor. The discourse spoken by the virtual tutor was identical in the dialog and monologue conditions, except in the fourth dialog condition. In both studies, learners wrote significantly more content and significantly more relevant content in the deep question condition than in the monologue condition. No other differences were significant. Results are discussed in terms of advanced organizers, schema theory, and discourse comprehension theory.


international conference on human computer interaction | 2009

Responding to Learners' Cognitive-Affective States with Supportive and Shakeup Dialogues

Sidney K. D'Mello; Scotty D. Craig; Karl Fike; Arthur C. Graesser

This paper describes two affect-sensitive variants of an existing intelligent tutoring system called AutoTutor. The new versions of AutoTutor detect learners' boredom, confusion, and frustration by monitoring conversational cues, gross body language, and facial features. The sensed cognitive-affective states are used to select AutoTutor's pedagogical and motivational dialogue moves and to drive the behavior of an embodied pedagogical agent that expresses emotions through verbal content, facial expressions, and affective speech. The first version, called the Supportive AutoTutor, addresses the presence of the negative states by providing empathetic and encouraging responses. The Supportive AutoTutor attributes the source of the learner's emotions to the material or itself, but never directly to the learner. In contrast, the second version, called the Shakeup AutoTutor, takes students to task by directly attributing the source of the emotions to the learners themselves and responding with witty, skeptical, and enthusiastic responses. This paper provides an overview of our theoretical framework and the design of the Supportive and Shakeup tutors.


International Journal of Speech Technology | 2001

AutoTutor: Incorporating Back-Channel Feedback and Other Human-Like Conversational Behaviors into an Intelligent Tutoring System

Sonya Rajan; Scotty D. Craig; Barry Gholson; Natalie K. Person; Arthur C. Graesser

This paper describes our recent attempts to incorporate human-like conversational behaviors into the dialog moves delivered by an animated pedagogical agent that simulates human tutors. We first present a brief overview of the modules comprising AutoTutor, an intelligent tutoring system. The second section describes a set of conversational behaviors that are being incorporated into AutoTutor. The behaviors of interest involve variations in intonation, head movements, arm and hand movements, facial expressions, eye blinking, gaze direction, and back-channel feedback. The final section presents a recent empirical study concerned with back-channel feedback events during human-to-human tutoring sessions. The back-channel feedback events emitted by tutors are mostly positive (63%), mostly verbal (77%), and immediately follow speech-act boundaries or noun-phrase boundaries (83%). Tutors also deliver back-channel events at a very high rate when students are emitting dialog, about 13 events per minute. Conversely, 88% of students' back-channel feedback events are head nods, and they occur at unbounded locations (63%).


Computers & Education | 2013

The impact of a technology-based mathematics after-school program using ALEKS on student's knowledge and behaviors

Scotty D. Craig; Xiangen Hu; Arthur C. Graesser; Anna E. Bargagliotti; Allan Sterbinsky; Kyle R. Cheney; Theresa M. Okwumabua

The effectiveness of using the Assessment and LEarning in Knowledge Spaces (ALEKS) system, an Intelligent Tutoring System for mathematics, as a method of strategic intervention in after-school settings to improve the mathematical skills of struggling students was examined using a randomized experimental design with two groups. As part of a 25-week program, student volunteers were randomly assigned to either a teacher-led classroom or a classroom in which students interacted with ALEKS while teachers were present. Students' math performance, conduct, involvement, and the assistance needed to complete tasks were investigated to determine the overall impact of the two programs. Students assigned to the ALEKS classrooms performed at the same level as students taught by expert teachers on the Tennessee Comprehensive Assessment Program (TCAP), which is given annually to all Tennessee students. Furthermore, students' conduct and involvement remained at the same levels in both conditions. However, students in the ALEKS after-school classrooms required significantly less assistance in mathematics from teachers to complete their daily work.

Collaboration


Dive into Scotty D. Craig's collaboration.

Top Co-Author: Jun Xie, University of Memphis