Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sidney K. D'Mello is active.

Publication


Featured research published by Sidney K. D'Mello.


IEEE Transactions on Affective Computing | 2010

Affect Detection: An Interdisciplinary Review of Models, Methods, and Their Applications

Rafael A. Calvo; Sidney K. D'Mello

This survey describes recent progress in the field of Affective Computing (AC), with a focus on affect detection. Although many AC researchers have traditionally attempted to remain agnostic to the different emotion theories proposed by psychologists, the affective technologies being developed are rife with theoretical assumptions that impact their effectiveness. Hence, an informed and integrated examination of emotion theories from multiple areas will need to become part of computing practice if truly effective real-world systems are to be achieved. This survey discusses theoretical perspectives that view emotions as expressions, embodiments, outcomes of cognitive appraisal, social constructs, products of neural circuitry, and psychological interpretations of basic feelings. It provides meta-analyses on existing reviews of affect detection systems that focus on traditional affect detection modalities like physiology, face, and voice, and also reviews emerging research on more novel channels such as text, body language, and complex multimodal systems. This survey explicitly explores the multidisciplinary foundation that underlies all AC applications by describing how AC researchers have incorporated psychological theories of emotion and how these theories affect research questions, methods, results, and their interpretations. In this way, models and methods can be compared, and emerging insights from various disciplines can be more expertly integrated.


Journal of Personality and Social Psychology | 2014

Boring but Important: A Self-Transcendent Purpose for Learning Fosters Academic Self-Regulation

David S. Yeager; Marlone D. Henderson; David Paunesku; Greg Walton; Sidney K. D'Mello; Brian James Spitzer; Angela L. Duckworth

Many important learning tasks feel uninteresting and tedious to learners. This research proposed that promoting a prosocial, self-transcendent purpose could improve academic self-regulation on such tasks. This proposal was supported in 4 studies with over 2,000 adolescents and young adults. Study 1 documented a correlation between a self-transcendent purpose for learning and self-reported trait measures of academic self-regulation. Those with more of a purpose for learning also persisted longer on a boring task rather than giving in to a tempting alternative and, many months later, were less likely to drop out of college. Study 2 addressed causality. It showed that a brief, one-time psychological intervention promoting a self-transcendent purpose for learning could improve high school science and math grade point average (GPA) over several months. Studies 3 and 4 were short-term experiments that explored possible mechanisms. They showed that the self-transcendent purpose manipulation could increase deeper learning behavior on tedious test review materials (Study 3), and sustain self-regulation over the course of an increasingly boring task (Study 4). More self-oriented motives for learning--such as the desire to have an interesting or enjoyable career--did not, on their own, consistently produce these benefits (Studies 1 and 4).


ACM Computing Surveys | 2015

A Review and Meta-Analysis of Multimodal Affect Detection Systems

Sidney K. D'Mello; Jacqueline M. Kory

Affect detection is an important pattern recognition problem that has inspired researchers from several areas. The field is in need of a systematic review due to the recent influx of Multimodal (MM) affect detection systems that differ in several respects and sometimes yield incompatible results. This article provides such a survey via a quantitative review and meta-analysis of 90 peer-reviewed MM systems. The review indicated that the state of the art mainly consists of person-dependent models (62.2% of systems) that fuse audio and visual (55.6%) information to detect acted (52.2%) expressions of basic emotions and simple dimensions of arousal and valence (64.5%) with feature- (38.9%) and decision-level (35.6%) fusion techniques. However, there were also person-independent systems that considered additional modalities to detect nonbasic emotions and complex dimensions using model-level fusion techniques. The meta-analysis revealed that MM systems were consistently (85% of systems) more accurate than their best unimodal counterparts, with an average improvement of 9.83% (median of 6.60%). However, improvements were three times lower when systems were trained on natural (4.59%) versus acted data (12.7%). Importantly, MM accuracy could be accurately predicted (cross-validated R2 of 0.803) from unimodal accuracies and two system-level factors. Theoretical and applied implications and recommendations are discussed.
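The improvement metric at the heart of this meta-analysis has a simple form: the relative gain of a multimodal (MM) system over its best unimodal counterpart, aggregated across systems. A minimal sketch using hypothetical accuracy pairs (not the paper's data):

```python
# Sketch of the meta-analytic improvement metric: relative gain of a
# multimodal (MM) system over its best unimodal counterpart, in percent.
# The accuracy pairs below are hypothetical, not the paper's data.

def percent_improvement(mm_acc, best_unimodal_acc):
    """Relative improvement of an MM system over its best unimodal one."""
    return 100.0 * (mm_acc - best_unimodal_acc) / best_unimodal_acc

# Hypothetical (multimodal, best-unimodal) accuracy pairs for three systems.
systems = [(0.82, 0.78), (0.74, 0.70), (0.90, 0.81)]

improvements = [percent_improvement(mm, uni) for mm, uni in systems]
mean_improvement = sum(improvements) / len(improvements)
median_improvement = sorted(improvements)[len(improvements) // 2]
```

Reporting both the mean and the median, as the survey does, guards against a few systems with very large gains inflating the headline number.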


Cognition & Emotion | 2011

The half-life of cognitive-affective states during complex learning

Sidney K. D'Mello; Arthur C. Graesser

We investigated the temporal dynamics of students’ cognitive-affective states (confusion, frustration, boredom, engagement/flow, delight, and surprise) during deep learning activities. After a learning session with an intelligent tutoring system with conversational dialogue, the cognitive-affective states of the learner were classified by the learner, a peer, and two trained judges at approximately 100 points in the tutorial session. Decay rates for the cognitive-affective states were estimated by fitting exponential curves to time series of affect responses. The results partially confirmed predictions of goal-appraisal theories of emotion by supporting a tripartite classification of the states along a temporal dimension: persistent states (boredom, engagement/flow, and confusion), transitory states (delight and surprise), and an intermediate state (frustration). Patterns of decay rates were generally consistent across affect judges, except that a reversed actor–observer effect was discovered for engagement/flow and frustration. Correlations between decay rates of the cognitive-affective states and several learning measures confirmed the major predictions and uncovered some novel findings that have implications for theories of pedagogy that integrate cognition and affect during deep learning.
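The decay-rate estimation described above fits an exponential curve p(t) = p0·exp(-λt) to a time series of affect proportions; the state's half-life is then ln(2)/λ. A minimal sketch via log-linear least squares, using hypothetical proportions rather than the study's observations:

```python
import math

# Fit p(t) = p0 * exp(-lam * t) by regressing log p on t (log-linear
# least squares). The proportions below are hypothetical, not the
# study's affect-judgment data.

def fit_decay(times, proportions):
    """Estimate (p0, lam) for p(t) = p0 * exp(-lam * t)."""
    logs = [math.log(p) for p in proportions]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    intercept = y_mean - slope * t_mean
    return math.exp(intercept), -slope  # p0, lam

def half_life(lam):
    """Time for the state's probability to halve."""
    return math.log(2) / lam

# Hypothetical proportions of a transitory state (e.g., surprise),
# halving every 10 seconds.
times = [0, 10, 20, 30]
props = [0.40, 0.20, 0.10, 0.05]

p0, lam = fit_decay(times, props)
```

Under this model, a persistent state such as boredom would show a small λ (long half-life) while a transitory state such as surprise would show a large λ.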


Cognition & Emotion | 2008

Emote aloud during learning with AutoTutor: Applying the Facial Action Coding System to cognitive–affective states during learning

Scotty D. Craig; Sidney K. D'Mello; Amy Witherspoon; Arthur C. Graesser

In an attempt to discover the facial action units for affective states that occur during complex learning, this study adopted an emote-aloud procedure in which participants were recorded as they verbalised their affective states while interacting with an intelligent tutoring system (AutoTutor). Participants' facial expressions were coded by two expert raters using Ekman's Facial Action Coding System and analysed using association rule mining techniques. Interrater agreement (kappa) between the two expert raters ranged between .76 and .84. The association rule mining analysis uncovered facial actions associated with confusion, frustration, and boredom. We discuss these rules and the prospects of enhancing AutoTutor with non-intrusive affect-sensitive capabilities.
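The agreement statistic reported above is Cohen's kappa: observed agreement between the two raters, corrected for agreement expected by chance. A minimal sketch with hypothetical rater labels rather than the study's FACS codings:

```python
from collections import Counter

# Cohen's kappa: observed agreement corrected for chance agreement.
# The rater labels below are hypothetical, not the study's data.

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: product of each rater's marginal label proportions.
    expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two raters coding the same 10 clips.
rater_a = ["confusion", "boredom", "confusion", "frustration", "boredom",
           "confusion", "boredom", "frustration", "confusion", "boredom"]
rater_b = ["confusion", "boredom", "confusion", "frustration", "confusion",
           "confusion", "boredom", "frustration", "confusion", "boredom"]

kappa = cohens_kappa(rater_a, rater_b)
```

The chance correction matters here because affective states are unevenly distributed: two raters who both label most clips "boredom" would show high raw agreement even if they agreed on little else.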


IEEE Transactions on Autonomous Mental Development | 2014

LIDA: A Systems-level Architecture for Cognition, Emotion, and Learning

Stan Franklin; Tamas Madl; Sidney K. D'Mello; Javier Snaider

We describe the Learning Intelligent Distribution Agent (LIDA), a cognitive architecture that affords attention, action selection, and human-like learning, intended for use in controlling cognitive agents that replicate human experiments as well as perform real-world tasks. LIDA combines sophisticated action selection, motivation via emotions, a centrally important attention mechanism, and multimodal instructionalist and selectionist learning. Empirically grounded in cognitive science and cognitive neuroscience, the LIDA architecture employs a variety of modules and processes, each with its own effective representations and algorithms. LIDA has much to say about motivation, emotion, attention, and autonomous learning in cognitive agents. In this paper, we summarize the LIDA model together with its resulting agent architecture, describe its computational implementation, and discuss results of simulations that replicate known experimental data. We also discuss some of LIDA's conceptual modules, propose nonlinear dynamics as a bridge between LIDA's modules and processes and the underlying neuroscience, and point out some of the differences between LIDA and other cognitive architectures. Finally, we discuss how LIDA addresses some of the open issues in cognitive architecture research.


International Conference on Software Engineering | 2014

Improving automated source code summarization via an eye-tracking study of programmers

Paige Rodeghero; Collin McMillan; Paul W. McBurney; Nigel Bosch; Sidney K. D'Mello

Source Code Summarization is an emerging technology for automatically generating brief descriptions of code. Current summarization techniques work by selecting a subset of the statements and keywords from the code, and then including information from those statements and keywords in the summary. The quality of the summary depends heavily on the process of selecting the subset: a high-quality selection would contain the same statements and keywords that a programmer would choose. Unfortunately, little evidence exists about the statements and keywords that programmers view as important when they summarize source code. In this paper, we present an eye-tracking study of 10 professional Java programmers in which the programmers read Java methods and wrote English summaries of those methods. We apply the findings to build a novel summarization tool. Then, we evaluate this tool and provide evidence to support the development of source code summarization systems.


International Conference on Multimodal Interfaces | 2012

Consistent but modest: a meta-analysis on unimodal and multimodal affect detection accuracies from 30 studies

Sidney K. D'Mello; Jacqueline M. Kory

The recent influx of multimodal affect classifiers raises the important question of whether these classifiers yield accuracy rates that exceed their unimodal counterparts. This question was addressed by performing a meta-analysis on 30 published studies that reported both multimodal and unimodal affect detection accuracies. The results indicated that multimodal accuracies were consistently better than unimodal accuracies and yielded an average 8.12% improvement over the best unimodal classifiers. However, performance improvements were three times lower when classifiers were trained on natural or seminatural data (4.39% improvement) compared to acted data (12.1% improvement). Importantly, performance of the best unimodal classifier explained an impressive 80.6% (cross-validated) of the variance in multimodal accuracy. The results also indicated that multimodal accuracies were substantially higher than accuracies of the second-best unimodal classifiers (an average improvement of 29.4%) irrespective of the naturalness of the training data. Theoretical and applied implications of the findings are discussed.


Artificial Intelligence in Education | 2010

Monitoring affect states during effortful problem solving activities

Sidney K. D'Mello; Blair Lehman; Natalie K. Person

We explored the affective states that students experienced during effortful problem solving activities. We conducted a study where 41 students solved difficult analytical reasoning problems from the Law School Admission Test. Students viewed videos of their faces and screen captures and judged their emotions from a set of 14 states (basic emotions, learning-centered emotions, and neutral) at relevant points in the problem solving process (after a new problem is displayed, in the midst of problem solving, after feedback is received). The results indicated that curiosity, frustration, boredom, confusion, happiness, and anxiety were the major emotions that students experienced, while contempt, anger, sadness, fear, disgust, eureka, and surprise were rare. Follow-up analyses on the temporal dynamics of the emotions, their contextual underpinnings, and relationships to problem solving outcomes supported a general characterization of the affective dimension of problem solving. Affective states differ in: (a) their probability of occurrence as regular, routine, or sporadic emotions, (b) their temporal dynamics as persistent or random emotions, (c) their characterizations as product or process related emotions, and (d) whether they were positively or negatively related to problem solving outcomes. A synthesis of our major findings, limitations, resolutions, and implications for affect-sensitive artificial learning environments is discussed.
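One simple way to operationalize the persistent-versus-random distinction above is the probability that a state observed at one judgment point is still present at the next. A minimal sketch with a hypothetical label sequence, not the study's data:

```python
# Persistence probability: P(state at t+1 | state at t), estimated from
# a sequence of affect judgments. The labels below are hypothetical.

def persistence_probability(sequence, state):
    """Fraction of occurrences of `state` that are immediately followed by itself."""
    stays = sum(1 for a, b in zip(sequence, sequence[1:])
                if a == state and b == state)
    occurrences = sum(1 for a in sequence[:-1] if a == state)
    return stays / occurrences if occurrences else 0.0

# Hypothetical affect judgments at successive points in a session.
labels = ["boredom", "boredom", "boredom", "confusion", "confusion",
          "frustration", "boredom", "boredom", "curiosity", "boredom"]

p_bored = persistence_probability(labels, "boredom")
```

A persistent state like boredom would show a high persistence probability, while a sporadic state like surprise would rarely follow itself.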


AI Magazine | 2013

Recent Advances in Conversational Intelligent Tutoring Systems

Vasile Rus; Sidney K. D'Mello; Xiangen Hu; Arthur C. Graesser

We report recent advances in intelligent tutoring systems with conversational dialogue. We highlight progress in terms of macro- and microadaptivity. Macroadaptivity refers to a system's capability to select appropriate instructional tasks for the learner to work on. Microadaptivity refers to a system's capability to adapt its scaffolding while the learner is working on a particular task. The advances in macro- and microadaptivity presented here were made possible by the use of learning progressions, deeper dialogue and natural language processing techniques, and affect-enabled components. Learning progressions and deeper dialogue and natural language processing techniques are key features of DeepTutor, the first intelligent tutoring system based on learning progressions. These improvements extend the bandwidth of possibilities for tailoring instruction to each individual student, which is needed to maximize engagement and, ultimately, learning.

Collaboration


Dive into Sidney K. D'Mello's collaborations.

Top Co-Authors

Nigel Bosch

University of Notre Dame

Caitlin Mills

University of British Columbia

Robert Bixler

University of Notre Dame

Ryan S. Baker

University of Pennsylvania
