Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Phuong Pham is active.

Publication


Featured research published by Phuong Pham.


Artificial Intelligence in Education | 2015

AttentiveLearner: Improving Mobile MOOC Learning via Implicit Heart Rate Tracking

Phuong Pham; Jingtao Wang

We present AttentiveLearner, an intelligent mobile learning system optimized for consuming lecture videos in both Massive Open Online Courses (MOOCs) and flipped classrooms. AttentiveLearner uses on-lens finger gestures as an intuitive control channel for video playback. More importantly, AttentiveLearner implicitly extracts learners’ heart rates and infers their attention by analyzing learners’ fingertip transparency changes during learning on today’s unmodified smartphones. In a 24-participant study, we found that heart rates extracted from noisy image frames via mobile cameras can be used to predict both learners’ “mind wandering” events in MOOC sessions and their performance in follow-up quizzes. The prediction performance of AttentiveLearner (accuracy = 71.22%, kappa = 0.22) is comparable to that of existing research using dedicated sensors. AttentiveLearner has the potential to improve mobile learning by reducing the sensing equipment required by many state-of-the-art intelligent tutoring algorithms.
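The core sensing step the abstract describes, recovering a pulse waveform from fingertip video and counting beats, can be sketched as follows. This is a minimal reconstruction under stated assumptions (grayscale frames, a Butterworth band-pass, simple peak counting), not the paper's implementation; the downstream mind-wandering classifier is not shown.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(frames, fps):
    """Estimate heart rate (BPM) from fingertip video frames.

    frames: array of shape (n_frames, height, width) holding grayscale
            intensities from the back camera, covered by a fingertip.
    fps:    camera frame rate in Hz.
    """
    # Blood volume changes modulate how much light passes through the
    # fingertip, so the mean brightness of each frame traces the pulse.
    signal = frames.reshape(len(frames), -1).mean(axis=1)

    # Band-pass to a plausible heart-rate band (0.7-3.5 Hz, i.e. roughly
    # 42-210 BPM) to suppress illumination drift and camera noise.
    nyquist = fps / 2.0
    b, a = butter(3, [0.7 / nyquist, 3.5 / nyquist], btype="band")
    filtered = filtfilt(b, a, signal - signal.mean())

    # Each systolic peak is one heartbeat; enforce a minimum spacing
    # that corresponds to the fastest plausible heart rate.
    peaks, _ = find_peaks(filtered, distance=fps / 3.5)
    if len(peaks) < 2:
        return None  # too little signal to estimate a rate
    seconds_per_beat = np.diff(peaks).mean() / fps
    return 60.0 / seconds_per_beat
```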


International Conference on Multimodal Interfaces | 2016

Adaptive review for mobile MOOC learning via implicit physiological signal sensing

Phuong Pham; Jingtao Wang

Massive Open Online Courses (MOOCs) have the potential to enable high-quality knowledge dissemination at large scale and low cost. However, today’s MOOCs also suffer from low engagement, uni-directional information flow, and a lack of personalization. In this paper, we propose AttentiveReview, an effective intervention technology for mobile MOOC learning. AttentiveReview infers a learner’s perceived difficulty levels of the corresponding learning materials via implicit photoplethysmography (PPG) sensing on unmodified smartphones. AttentiveReview also recommends personalized review sessions through a user-independent model. In a 32-participant user study, we found that: 1) AttentiveReview significantly improved information recall (+14.6%) and learning gain (+17.4%) compared with the no-review condition; 2) AttentiveReview achieved comparable performance in significantly less time compared with the full-review condition; 3) as an end-to-end mobile tutoring system, the benefits of AttentiveReview outweigh the side effects of false positives and false negatives. Overall, we show that it is feasible to improve mobile MOOC learning by adaptively recommending review materials from rich but noisy physiological signals.
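A minimal sketch of the adaptive-review idea, assuming the user-independent model has already produced a per-topic difficulty score in [0, 1]: material above a threshold is queued for review, hardest first. The function, threshold, and cap below are hypothetical, not the paper's policy.

```python
def recommend_review(topic_scores, threshold=0.5, max_items=3):
    """Queue topics whose predicted difficulty exceeds a threshold.

    topic_scores: dict mapping topic id -> difficulty score in [0, 1],
                  e.g. the output of a user-independent classifier over
                  PPG features. threshold and max_items are illustrative.
    """
    flagged = [(score, topic) for topic, score in topic_scores.items()
               if score >= threshold]
    flagged.sort(reverse=True)  # review the hardest material first
    return [topic for _, topic in flagged[:max_items]]

# Example with invented scores for three lecture segments.
print(recommend_review({"loops": 0.82, "recursion": 0.64, "variables": 0.21}))
# -> ['loops', 'recursion']
```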


Artificial Intelligence in Education | 2017

AttentiveLearner2: A Multimodal Approach for Improving MOOC Learning on Mobile Devices

Phuong Pham; Jingtao Wang

We propose AttentiveLearner2, a multimodal mobile learning system for MOOCs running on unmodified smartphones. AttentiveLearner2 uses both the front and back cameras of a smartphone as two complementary and fine-grained real-time feedback channels: the back camera monitors learners’ photoplethysmography (PPG) signals and the front camera tracks their facial expressions during MOOC learning. AttentiveLearner2 implicitly infers learners’ affective and cognitive states during learning by analyzing these PPG signals and facial expressions. In a 26-participant user study, we found that it is feasible to detect six types of emotion during learning from the collected PPG signals and facial expressions, and that the two modalities complement each other.
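The two-camera design suggests late fusion of per-channel classifier outputs. The sketch below averages per-emotion probabilities from a hypothetical PPG classifier and a hypothetical facial-expression classifier; the six labels and the weighting are placeholder assumptions, since the abstract does not name the emotion set.

```python
import numpy as np

# Placeholder label set: the study detects six emotion types, but the
# abstract does not name them, so these six are assumptions.
EMOTIONS = ["engagement", "boredom", "confusion",
            "frustration", "curiosity", "happiness"]

def fuse_predictions(ppg_probs, face_probs, ppg_weight=0.5):
    """Late fusion of per-emotion probabilities from two channels.

    ppg_probs, face_probs: arrays of shape (len(EMOTIONS),) produced by
    two independent classifiers. A weighted average lets one channel
    compensate when the other is noisy (poor lighting for the front
    camera, a loose fingertip on the back camera, and so on).
    """
    fused = (ppg_weight * np.asarray(ppg_probs)
             + (1.0 - ppg_weight) * np.asarray(face_probs))
    return EMOTIONS[int(np.argmax(fused))]

# Example: the PPG channel is ambiguous; the face channel tips the call.
print(fuse_predictions([0.3, 0.3, 0.1, 0.1, 0.1, 0.1],
                       [0.1, 0.1, 0.6, 0.1, 0.05, 0.05]))
# -> 'confusion'
```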


Artificial Intelligence in Education | 2017

Dynamics of Affective States During MOOC Learning

Xiang Xiao; Phuong Pham; Jingtao Wang

We investigate the temporal dynamics of learners’ affective states (e.g., engagement, boredom, confusion, and frustration) during video-based learning sessions in Massive Open Online Courses (MOOCs) in a 22-participant user study. We also show the feasibility of predicting learners’ moment-to-moment affective states via implicit photoplethysmography (PPG) sensing on unmodified smartphones.
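Moment-to-moment prediction implies scoring overlapping windows of the PPG trace, one feature vector per window position. Below is a generic windowing sketch; the window length, hop, and toy statistics are illustrative assumptions, not the study's feature set.

```python
import numpy as np

def sliding_windows(signal, fps, window_s=10.0, step_s=1.0):
    """Split a PPG trace into overlapping windows for moment-to-moment
    affect prediction: one feature vector per window position.

    signal:   1-D pulse waveform sampled at fps Hz.
    window_s: window length in seconds (illustrative default).
    step_s:   hop between consecutive windows in seconds.
    """
    win, step = int(window_s * fps), int(step_s * fps)
    for start in range(0, len(signal) - win + 1, step):
        chunk = signal[start:start + win]
        # Toy per-window statistics as stand-in features; a real pipeline
        # would derive inter-beat intervals, heart-rate variability, etc.
        features = np.array([chunk.mean(), chunk.std(),
                             chunk.max() - chunk.min()])
        yield start / fps, features  # (timestamp in seconds, features)
```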


Intelligent Tutoring Systems | 2018

Predicting Learners’ Emotions in Mobile MOOC Learning via a Multimodal Intelligent Tutor

Phuong Pham; Jingtao Wang

Massive Open Online Courses (MOOCs) are a promising approach for scalable knowledge dissemination. However, they also face major challenges such as low engagement, low retention rates, and a lack of personalization. We propose AttentiveLearner2, a multimodal intelligent tutor running on unmodified smartphones, to supplement today’s clickstream-based learning analytics for MOOCs. AttentiveLearner2 uses both the front and back cameras of a smartphone as two complementary and fine-grained real-time feedback channels: the back camera monitors learners’ photoplethysmography (PPG) signals and the front camera tracks their facial expressions during MOOC learning. AttentiveLearner2 implicitly infers learners’ affective and cognitive states during learning from their PPG signals and facial expressions. Through a 26-participant user study, we found that: (1) AttentiveLearner2 can detect six emotions in mobile MOOC learning reliably and with high accuracy (average accuracy = 84.4%); (2) the detected emotions can predict learning outcomes (best R² = 50.6%); and (3) it is feasible to track both PPG signals and facial expressions in real time in a scalable manner on today’s unmodified smartphones.
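As a toy illustration of finding (2), regressing outcomes on detected emotions: the sketch fits ordinary least squares on invented per-learner emotion-time proportions against quiz scores. All numbers and the feature choice are made up for the example; the paper's actual regression setup is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented per-learner features: fraction of session time the detector
# labeled as engagement, confusion, and boredom (columns); the target is
# the post-session quiz score. All values are made up for illustration.
X = np.array([
    [0.6, 0.2, 0.2],
    [0.5, 0.3, 0.2],
    [0.3, 0.4, 0.3],
    [0.7, 0.1, 0.2],
    [0.2, 0.3, 0.5],
    [0.4, 0.4, 0.2],
])
y = np.array([0.85, 0.80, 0.60, 0.90, 0.45, 0.70])

model = LinearRegression().fit(X, y)
print("R^2:", round(model.score(X, y), 3))  # fit quality on this toy data
print("Predicted quiz score:", model.predict([[0.55, 0.25, 0.20]])[0])
```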


Intelligent User Interfaces | 2017

Understanding Emotional Responses to Mobile Video Advertisements via Physiological Signal Sensing and Facial Expression Analysis

Phuong Pham; Jingtao Wang


Journal of the American Medical Informatics Association | 2018

NLPReViz: an interactive tool for natural language processing on clinical text

Gaurav Trivedi; Phuong Pham; Wendy W. Chapman; Rebecca Hwa; Janyce Wiebe; Harry Hochheiser


arXiv: Human-Computer Interaction | 2017

An Interactive Tool for Natural Language Processing on Clinical Text

Gaurav Trivedi; Phuong Pham; Wendy W. Chapman; Rebecca Hwa; Janyce Wiebe; Harry Hochheiser


International Conference on Multimodal Interfaces | 2015

AttentiveLearner: Adaptive Mobile MOOC Learning via Implicit Cognitive States Inference

Xiang Xiao; Phuong Pham; Jingtao Wang


International Conference on Acoustics, Speech, and Signal Processing | 2018

Eventness: Object Detection on Spectrograms for Temporal Localization of Audio Events

Phuong Pham; Juncheng Li; Joseph Szurley; Samarjit Das

Collaboration


Dive into Phuong Pham's collaborations.

Top Co-Authors

Jingtao Wang

University of Pittsburgh

Gaurav Trivedi

University of Pittsburgh

Janyce Wiebe

University of Pittsburgh

Rebecca Hwa

University of Pittsburgh

Xiang Xiao

University of Pittsburgh

Joseph Szurley

Katholieke Universiteit Leuven
