Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mohammad Rafayet Ali is active.

Publication


Featured research published by Mohammad Rafayet Ali.


affective computing and intelligent interaction | 2015

LISSA — Live Interactive Social Skill Assistance

Mohammad Rafayet Ali; Dev Crasta; Li Jin; Agustin Baretto; Joshua Pachter; Ronald D. Rogge; Mohammed E. Hoque

We present LISSA - Live Interactive Social Skill Assistance - a web-based system that helps people practice their conversational skills by having short conversations with a human-like virtual agent and receiving real-time feedback on their nonverbal behavior. In this paper, we describe the development of an interface for these features and examine the viability of real-time feedback using a Wizard-of-Oz prototype. We then evaluated our system using a speed-dating study design: we invited 47 undergraduate male students to interact with staff and randomly assigned them to an intervention with LISSA or a self-help control group. Results suggested that participants who practiced with the LISSA system were rated significantly better at nodding than the self-help control group, and marginally better at eye contact and gesturing. System usability surveys showed that participants found the feedback provided by the system useful, unobtrusive, and easy to understand.


ubiquitous computing | 2016

ROC comment: automated descriptive and subjective captioning of behavioral videos

Mohammad Rafayet Ali; Facundo Ciancio; Ru Zhao; Iftekhar Naim; Mohammed E. Hoque

We present an automated interface, ROC Comment, for generating natural language comments on behavioral videos. We focus on the domain of public speaking, which many people consider their greatest fear. We collect a dataset of 196 public speaking videos from 49 individuals and gather 12,173 comments, generated by more than 500 independent human judges. We then train a k-Nearest-Neighbor (k-NN) based model by extracting prosodic (e.g., volume) and facial (e.g., smiles) features. Given a new video, we extract features and select the closest comments using the k-NN model. We further filter the comments by clustering them with DBSCAN and eliminating the outliers. An evaluation of our system with 30 participants concluded that while the generated comments are helpful, there is room for improvement in further personalizing them. Our model has been deployed online, allowing individuals to upload their videos and receive open-ended and interpretative comments. Our system is available at http://tinyurl.com/roccomment.
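The retrieval-and-filter pipeline described in the abstract (k-NN lookup over prosodic and facial features, then DBSCAN to discard outlier comments) can be sketched roughly as follows. All feature values and comments below are synthetic placeholders, not the paper's data, scikit-learn stands in for whatever implementation the authors used, and clustering candidates by their source-video features is a simplifying assumption.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

# Synthetic training set: each video is a feature vector
# (e.g., mean volume, smile intensity) paired with one human comment.
rng = np.random.default_rng(0)
train_features = rng.random((50, 2))                   # 50 videos, 2 features
train_comments = [f"comment {i}" for i in range(50)]   # placeholder comments

# Index the training videos for k-NN retrieval.
knn = NearestNeighbors(n_neighbors=10).fit(train_features)

def comments_for(new_video_features):
    # 1) Retrieve the comments attached to the k closest training videos.
    _, idx = knn.kneighbors([new_video_features])
    candidates = [train_comments[i] for i in idx[0]]
    candidate_feats = train_features[idx[0]]

    # 2) Cluster the candidates with DBSCAN and drop outliers
    #    (label == -1), keeping comments that agree with the
    #    dominant neighborhood.
    labels = DBSCAN(eps=0.3, min_samples=2).fit_predict(candidate_feats)
    return [c for c, lab in zip(candidates, labels) if lab != -1]

print(comments_for([0.5, 0.5]))
```

In the deployed system the comments themselves (free text) are what get clustered; that would require embedding the text first, which this sketch omits.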


intelligent virtual agents | 2016

The LISSA Virtual Human and ASD Teens: An Overview of Initial Experiments

Seyedeh Zahra Razavi; Mohammad Rafayet Ali; Tristram Smith; Lenhart K. Schubert; Mohammed E. Hoque

We summarize an exploratory investigation into using an autonomous conversational agent for improving the communication skills of teenagers with autism. The system conducts a natural conversation with the user and gives real-time and post-session feedback on the user’s nonverbal behavior. We obtained promising results and ideas for improvements in preliminary experiments with five autism spectrum disorder teens.


international symposium on wearable computers | 2017

Social skills training with virtual assistant and real-time feedback

Mohammad Rafayet Ali; Ehsan Hoque

Nonverbal cues are considered the most important part of social communication. Many people desire to improve these skills, but due to stigma and the unavailability of resources, they are unable to practice them. In this work, we envision a virtual assistant that gives individuals real-time feedback on their smiles, eye contact, body language, and volume modulation, and that is available anytime, anywhere through a web browser. To instantiate our idea, we set up a Wizard-of-Oz study in the context of speed-dating with 47 individuals. We collected videos of the participants having a conversation with a virtual agent before and after a speed-dating session. The study revealed that participants who used our system improved their gesturing in a face-to-face conversation. Our next goal is to explore different machine learning techniques on the facial and prosodic features to automatically generate feedback on nonverbal cues. In addition, we want to explore different strategies for conveying real-time feedback that are non-threatening, repeatable, objective, and more likely to transfer to a real-world conversation.


affective computing and intelligent interaction | 2015

Automated conversation skills assistant

Mohammad Rafayet Ali

Conversational skills training is becoming popular nowadays but is often hard to obtain due to expense and lack of accessibility. In this paper, we present the idea of an automated conversational skills training assistant that provides both real-time and post-session summary feedback while the user converses with a virtual agent. Our exploratory effort shows the applicability of this system and a significant improvement in our participants' conversational skills over a control group. We present our intended goals and the methodologies we will apply to our collected training data in order to automate this system in the future.


intelligent user interfaces | 2018

Aging and Engaging: A Social Conversational Skills Training Program for Older Adults

Mohammad Rafayet Ali; Kimberly A. Van Orden; Kimberly Parkhurst; Shuyang Liu; Viet-Duy Nguyen; Paul R. Duberstein; M. Ehsan Hoque


ieee international conference on automatic face gesture recognition | 2018

The What, When, and Why of Facial Expressions: An Objective Analysis of Conversational Skills in Speed-Dating Videos

Mohammad Rafayet Ali; Taylan Sen; Dev Crasta; Viet-Duy Nguyen; Ronald D. Rogge; Mohammed E. Hoque


ieee international conference on automatic face gesture recognition | 2018

Analyzing the Impact of Gender on the Automation of Feedback for Public Speaking

Astha Singhal; Mohammad Rafayet Ali; Raiyan Abdul Baten; Chigusa Kurumada; Elizabeth West Marvin; Mohammed E. Hoque


Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies archive | 2018

CoCo: Collaboration Coach for Understanding Team Dynamics during Video Conferencing

Samiha Samrose; Ru Zhao; Jeffery White; Vivian Li; Luis Nova; Yichen Lu; Mohammad Rafayet Ali; Mohammed E. Hoque


affective computing and intelligent interaction | 2017

Modeling doctor-patient communication with affective text analysis

Taylan Sen; Mohammad Rafayet Ali; Mohammed E. Hoque; Ronald M. Epstein; Paul R. Duberstein

Collaboration


Dive into Mohammad Rafayet Ali's collaborations.

Top Co-Authors

Dev Crasta
University of Rochester

Ru Zhao
University of Rochester

Taylan Sen
University of Rochester

Ehsan Hoque
University of Rochester