Featured Research

Human Computer Interaction

Affective State Recognition through EEG Signals Feature Level Fusion and Ensemble Classifier

Human affect is a complex phenomenon and an active research domain in affective computing. Affective states are traditionally determined through self-report-based psychometric questionnaires or through facial expression recognition. However, a few state-of-the-art studies have shown that human affects can also be recognized from psychophysiological and neurological signals. In this article, electroencephalogram (EEG) signals are used to recognize human affects. EEG signals are collected from 100 participants while they watch one-minute video stimuli chosen to induce different affective states. The emotionally tagged videos cover a range of affects, including happy, sad, disgust, and peaceful. The experimental data are collected and analyzed intensively. The interrelationship between the EEG signal frequencies and the ratings given by the participants is taken into consideration when classifying affective states. Advanced feature extraction techniques are applied along with statistical features to prepare a fused feature vector for affective state recognition. Factor analysis methods are then applied to select discriminative features. Finally, several popular supervised machine learning classifiers are applied to recognize different affective states from the discriminative feature vector. In the experiments, the designed random forest classifier achieves 89.06% accuracy in classifying the four basic affective states.
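
The pipeline described in the abstract (spectral EEG features fused with statistical features, factor analysis for feature selection, and a random forest classifier) could look roughly like the sketch below. This is a minimal illustration rather than the authors' implementation: the synthetic data, frequency-band definitions, and hyperparameters are all assumptions.

```python
# Minimal sketch of a feature-level-fusion + random forest pipeline for EEG
# affect recognition. Synthetic data, band definitions and hyperparameters
# are illustrative assumptions, not the paper's actual configuration.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

FS = 128  # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(epoch):
    """Average power per frequency band and channel (epoch: channels x samples)."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)

def statistical_features(epoch):
    """Simple per-channel statistics, fused with the spectral features."""
    return np.concatenate([epoch.mean(-1), epoch.std(-1), np.ptp(epoch, -1)])

def fused_vector(epoch):
    return np.concatenate([band_powers(epoch), statistical_features(epoch)])

# Synthetic stand-in for 32-channel EEG segments with four affect labels.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 32, FS * 5))
labels = rng.integers(0, 4, size=200)          # happy / sad / disgust / peaceful

X = np.array([fused_vector(e) for e in epochs])
clf = make_pipeline(FactorAnalysis(n_components=20, random_state=0),
                    RandomForestClassifier(n_estimators=300, random_state=0))
print(cross_val_score(clf, X, labels, cv=5).mean())
```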

Read more
Human Computer Interaction

AffectiveSpotlight: Facilitating the Communication of Affective Responses from Audience Members during Online Presentations

The ability to monitor audience reactions is critical when delivering presentations, yet current videoconferencing platforms offer limited support for it. This work leverages recent advances in affect sensing to capture and communicate relevant audience signals. Using an exploratory survey (N = 175), we identified the audience responses presenters find most relevant, such as confusion, engagement, and head-nods. We then implemented AffectiveSpotlight, a Microsoft Teams bot that analyzes the facial responses and head gestures of audience members and dynamically spotlights the most expressive ones. In a within-subjects study with 14 groups (N = 117), we observed that, compared to two control conditions (a randomly selected spotlight and the default platform UI), the system made presenters significantly more aware of their audience, led them to speak for longer, and brought their self-assessments of talk quality closer to the audience's ratings. Based on feedback from the study, we provide design recommendations for future affective interfaces for online presentations.
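
The core spotlighting step described above (rank audience members by some expressiveness measure built from facial responses and head gestures, then spotlight the top-ranked person for the next interval) might be sketched as follows. The score weights, field names, and sensing backend are assumptions for illustration; the actual bot is built on Microsoft Teams and affect-sensing APIs not shown here.

```python
# Illustrative sketch of spotlight selection: rank audience members by an
# expressiveness score and spotlight the most expressive one each interval.
# Weights, field names and the sensing backend are assumptions.
from dataclasses import dataclass

@dataclass
class AudienceFrame:
    member_id: str
    expression_prob: float   # probability of a non-neutral facial expression
    head_nod: bool           # head-nod detected in this window
    confusion_prob: float    # probability of a confusion-like expression

WEIGHTS = {"expression": 1.0, "nod": 0.5, "confusion": 1.5}

def expressiveness(frame: AudienceFrame) -> float:
    return (WEIGHTS["expression"] * frame.expression_prob
            + WEIGHTS["nod"] * float(frame.head_nod)
            + WEIGHTS["confusion"] * frame.confusion_prob)

def pick_spotlight(frames: list[AudienceFrame]) -> str:
    """Return the member to spotlight for the next interval."""
    return max(frames, key=expressiveness).member_id

frames = [
    AudienceFrame("alice", 0.2, False, 0.1),
    AudienceFrame("bob",   0.7, True,  0.0),
    AudienceFrame("carol", 0.4, False, 0.3),
]
print(pick_spotlight(frames))  # -> "bob"
```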

Read more
Human Computer Interaction

AirWare: Utilizing Embedded Audio and Infrared Signals for In-Air Hand-Gesture Recognition

We introduce AirWare, an in-air hand-gesture recognition system that uses the speaker and microphone already embedded in most electronic devices, together with embedded infrared proximity sensors. Gestures identified by AirWare are performed in the air above a touchscreen or mobile phone. AirWare uses convolutional neural networks to classify a large vocabulary of hand gestures from multi-modal audio Doppler signatures and infrared (IR) sensor information. Unlike other systems, which use high-frequency Doppler radars or depth cameras to identify in-air gestures, AirWare does not require any external sensors. In our analysis, we use openly available APIs to interface with the Samsung Galaxy S5 audio and proximity sensors for data collection. We find that AirWare is not reliable enough for a deployable interaction system when classifying a set of 21 gestures, with an average true positive rate of only 50.5% per gesture. To improve performance, we train AirWare to identify subsets of the 21-gesture vocabulary based on possible usage scenarios. We find that AirWare can identify three gesture sets, each with an average true positive rate above 80% and containing 4--7 gestures; together these sets comprise a vocabulary of 16 unique in-air gestures.
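
A minimal sketch of the kind of multi-modal classifier the abstract describes: a small convolutional network over the audio Doppler spectrogram whose features are concatenated with features from the IR proximity signal before a final gesture classification layer. The architecture, input shapes, and layer sizes are assumptions, not the actual AirWare network.

```python
# Sketch of a multi-modal gesture classifier: CNN over a Doppler spectrogram
# fused with an IR proximity time series. Shapes and layer sizes are assumed.
import torch
import torch.nn as nn

class DopplerIRNet(nn.Module):
    def __init__(self, n_gestures: int = 21):
        super().__init__()
        self.audio_cnn = nn.Sequential(            # input: 1 x 64 x 64 spectrogram
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),                          # -> 32 * 16 * 16 features
        )
        self.ir_mlp = nn.Sequential(               # input: 100-sample IR trace
            nn.Linear(100, 32), nn.ReLU(),
        )
        self.classifier = nn.Linear(32 * 16 * 16 + 32, n_gestures)

    def forward(self, spectrogram, ir_trace):
        fused = torch.cat([self.audio_cnn(spectrogram), self.ir_mlp(ir_trace)], dim=1)
        return self.classifier(fused)

model = DopplerIRNet()
logits = model(torch.randn(8, 1, 64, 64), torch.randn(8, 100))
print(logits.shape)  # torch.Size([8, 21])
```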

Read more
Human Computer Interaction

Ajalon: Simplifying the Authoring of Wearable Cognitive Assistants

Wearable Cognitive Assistance (WCA) amplifies human cognition in real time through a wearable device and low-latency wireless access to edge computing infrastructure. It is inspired by, and broadens, the metaphor of GPS navigation tools that provide real-time step-by-step guidance, with prompt error detection and correction. WCA applications are likely to be transformative in education, health care, industrial troubleshooting, manufacturing, and many other areas. Today, WCA application development is difficult and slow, requiring skills in areas such as machine learning and computer vision that are not widespread among software developers. This paper describes Ajalon, an authoring toolchain for WCA applications that reduces the skill and effort needed at each step of the development pipeline. Our evaluation shows that Ajalon significantly reduces the effort needed to create new WCA applications.

Read more
Human Computer Interaction

Alexa Depression and Anxiety Self-tests: A Preliminary Analysis of User Experience and Trust

Mental health resources available via websites and mobile apps provide support such as advice, journaling, and elements of cognitive behavioral therapy. The proliferation of spoken conversational agents, such as Alexa, Siri, and Google Home, has led to increasing interest in developing mental health apps for these devices. We present the pilot study outcomes of an Alexa Skill that allows users to conduct depression and anxiety self-tests. Ten participants were given access to the Alexa Skill for two weeks, followed by an online evaluation of the Skill's usability and trust. Our preliminary evaluation suggests that participants trusted the Skill and rated its usability and user experience as average. Usage of the Skill was low, with most participants using it only once. As this is work in progress, we also discuss implementation and study-design challenges to inform ongoing work on designing spoken conversational agents for mental health applications.

Read more
Human Computer Interaction

Alfie: An Interactive Robot with a Moral Compass

This work introduces Alfie, an interactive robot capable of answering a user's moral (deontological) questions. The interaction is designed so that, when the user disagrees with a given answer, they can offer an alternative, allowing Alfie to learn from its interactions. Alfie's answers are based on a sentence embedding model built on state-of-the-art language models, e.g., the Universal Sentence Encoder and BERT. Alfie is implemented on a Furhat robot, which provides a customizable user interface for designing a social robot.
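
One common way to use sentence embeddings for such yes/no moral questions is to embed the question together with candidate answer templates and pick the answer whose embedding is closest to the question. The sketch below illustrates that mechanism only; the embedding model is a generic stand-in for the Universal Sentence Encoder / BERT models mentioned in the abstract, and the templates are illustrative assumptions, not Alfie's actual prompts.

```python
# Illustrative sketch of embedding-based moral question answering. The model
# and answer templates are stand-ins, not Alfie's actual configuration.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for USE / BERT embeddings

def answer(action: str) -> str:
    """Pick the candidate answer whose embedding is closest to the question."""
    candidates = [f"Yes, you should {action}.", f"No, you should not {action}."]
    q_vec, *c_vecs = model.encode([f"Should I {action}?"] + candidates)
    sims = [np.dot(q_vec, c) / (np.linalg.norm(q_vec) * np.linalg.norm(c))
            for c in c_vecs]
    return candidates[int(np.argmax(sims))]

print(answer("help people"))
print(answer("lie to my friends"))
```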

Read more
Human Computer Interaction

All Factors Should Matter! Reference Checklist for Describing Research Conditions in Pursuit of Comparable IVR Experiments

A significant problem with immersive virtual reality (IVR) experiments is the ability to compare research conditions. VR kits and IVR environments are complex and diverse, but researchers from different fields, e.g. ICT, psychology, or marketing, often neglect to describe them with a level of detail sufficient to situate their research on the IVR landscape. Careful reporting of these conditions may increase the applicability of research results and their impact on the shared body of knowledge on HCI and IVR. Based on a literature review, our own experience and practice, and a synthesis of key IVR factors, we present in this article a reference checklist for describing the research conditions of IVR experiments. Including these details in publications will contribute to the comparability of IVR research and help other researchers decide to what extent reported results are relevant to their own research goals. The compiled checklist is a ready-to-use reference tool that takes into account key hardware, software, and human factors, as well as diverse factors connected to the visual, audio, tactile, and other aspects of interaction.

Read more
Human Computer Interaction

An Augmented Reality Interaction Interface for Autonomous Drone

Human-drone interaction in autonomous navigation involves spatial interaction tasks, such as inspecting the 3D map reconstructed by the drone and specifying the operator's desired target positions. Augmented Reality (AR) devices can be powerful interactive tools for handling these spatial interactions. In this work, we build an AR interface that displays the drone's reconstructed 3D map on physical surfaces in front of the operator. Spatial target positions can then be set on the 3D map through intuitive head gaze and hand gestures. The AR interface is deployed to interact with an autonomous drone exploring an unknown environment, and a user study is conducted to evaluate the overall interaction performance.
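
One technical step such an interface relies on is mapping a gaze- or gesture-selected point on the AR-rendered map hologram back into the drone's map frame so it can be sent as a navigation goal. Below is a minimal coordinate-transform sketch, assuming the hologram's pose and scale in the AR world frame are known; the frame names and function are assumptions rather than the system's actual interface.

```python
# Sketch: convert a gaze/gesture hit point on the AR map hologram (world frame)
# into a target position in the drone's map frame. Poses and scale are assumed
# to come from the AR anchor; names are illustrative.
import numpy as np

def world_to_map(hit_world, anchor_pos, anchor_rot, map_scale):
    """hit_world: 3-vector in the AR world frame; anchor_rot: 3x3 rotation of
    the hologram; map_scale: hologram metres per real-world metre."""
    local = anchor_rot.T @ (hit_world - anchor_pos)   # into the hologram's frame
    return local / map_scale                          # undo the display scaling

# Example: map hologram placed 1 m in front of the user, shrunk 50x, yawed 90 deg.
yaw = np.deg2rad(90)
rot = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                [np.sin(yaw),  np.cos(yaw), 0],
                [0,            0,           1]])
target = world_to_map(np.array([1.2, 0.1, 0.0]),
                      anchor_pos=np.array([1.0, 0.0, 0.0]),
                      anchor_rot=rot, map_scale=0.02)
print(target)  # drone-frame goal, e.g. sent to the planner as (x, y, z)
```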

Read more
Human Computer Interaction

An Examination of Grouping and Spatial Organization Tasks for High-Dimensional Data Exploration

How do analysts think about grouping and spatial operations? This overarching question incorporates a number of points for investigation, including understanding how analysts begin to explore a dataset, the types of grouping/spatial structures created and the operations performed on them, the relationship between grouping and spatial structures, the decisions analysts make when exploring individual observations, and the role of external information. This work contributes the design and results of such a study, in which a group of participants are asked to organize the data contained within an unfamiliar quantitative dataset. We identify several overarching approaches taken by participants to design their organizational space, discuss the interactions performed by the participants, and propose design recommendations to improve the usability of future high-dimensional data exploration tools that make use of grouping (clustering) and spatial (dimension reduction) operations.
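
The two classes of operations the study examines, grouping (clustering) and spatial organization (dimension reduction), map directly onto standard tooling. A minimal sketch of how such a tool might compute an initial 2-D layout and grouping for an unfamiliar quantitative dataset is given below; the dataset, algorithms, and parameters are illustrative assumptions, not those used in the study.

```python
# Sketch of the grouping (clustering) + spatial (dimension reduction) operations
# such a tool exposes: project observations to 2-D and assign initial groups.
# Dataset choice and parameters are illustrative.
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X, _ = load_wine(return_X_y=True)           # stand-in quantitative dataset
X = StandardScaler().fit_transform(X)

coords = PCA(n_components=2).fit_transform(X)              # spatial layout
groups = KMeans(n_clusters=3, n_init=10).fit_predict(X)    # grouping

for (x, y), g in list(zip(coords, groups))[:5]:
    print(f"group {g}: position ({x:.2f}, {y:.2f})")
```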

Read more
Human Computer Interaction

An Overview of Enhancing Distance Learning Through Augmented and Virtual Reality Technologies

Although distance learning presents a number of interesting educational advantages compared to in-person instruction, it is not without its downsides. We first assess the educational challenges presented by distance learning as a whole and identify four main challenges it currently presents compared to in-person instruction: the lack of social interaction, reduced student engagement and focus, reduced comprehension and information retention, and the lack of flexible and customizable instructor resources. After assessing each of these challenges in depth, we examine how AR/VR technologies might address each challenge, along with their current shortcomings, and finally outline the further research required to fully understand the potential of AR/VR technologies as they apply to distance learning.

Read more
