Featured Researches

Human Computer Interaction

Discussing the Risks of Adaptive Virtual Environments for User Autonomy

Adaptive virtual environments are an opportunity to support users and increase their flow, presence, immersion, and overall experience. Possible fields of application are adaptive individual education, gameplay adjustment, professional work, and personalized content. But who benefits more from this adaptivity: the users, who can enjoy a greater user experience, or the companies and governments who are completely in control of the provided content? While user autonomy decreases for individuals, the power of institutions rises, and the risk exists that personal opinions are precisely controlled. In this position paper, we argue that researchers should not only present the benefits of their work but also critically discuss possible abusive use cases. To that end, we examine two use cases in the fields of professional work and personalized content and show possible abusive use.

Read more
Human Computer Interaction

Disparate Impact Diminishes Consumer Trust Even for Advantaged Users

Systems aiming to aid consumers in their decision-making (e.g., by implementing persuasive techniques) are more likely to be effective when consumers trust them. However, recent research has demonstrated that the machine learning algorithms that often underlie such technology can act unfairly towards specific groups (e.g., by making more favorable predictions for men than for women). An undesired disparate impact resulting from this kind of algorithmic unfairness could diminish consumer trust and thereby undermine the purpose of the system. We studied this effect by conducting a between-subjects user study investigating how (gender-related) disparate impact affected consumer trust in an app designed to improve consumers' financial decision-making. Our results show that disparate impact decreased consumers' trust in the system and made them less likely to use it. Moreover, we find that trust was affected to the same degree across consumer groups (i.e., advantaged and disadvantaged users) despite both of these consumer groups recognizing their respective levels of personal benefit. Our findings highlight the importance of fairness in consumer-oriented artificial intelligence systems.

Read more
Human Computer Interaction

Dissecting the Meme Magic: Understanding Indicators of Virality in Image Memes

Despite the increasingly important role played by image memes, we do not yet have a solid understanding of the elements that might make a meme go viral on social media. In this paper, we investigate what visual elements distinguish image memes that are highly viral on social media from those that do not get re-shared, across three dimensions: composition, subjects, and target audience. Drawing from research in art theory, psychology, marketing, and neuroscience, we develop a codebook to characterize image memes, and use it to annotate a set of 100 image memes collected from 4chan's Politically Incorrect Board (/pol/). On the one hand, we find that highly viral memes are more likely to use a close-up scale, contain characters, and include positive or negative emotions. On the other hand, image memes that do not present a clear subject the viewer can focus attention on, or that include long text, are not likely to be re-shared by users. We train machine learning models to distinguish between image memes that are likely to go viral and those that are unlikely to be re-shared, obtaining an AUC of 0.866 on our dataset. We also show that the indicators of virality identified by our model can help characterize the most viral memes posted on mainstream online social networks too, as our classifiers are able to predict 19 out of the 20 most popular image memes posted on Twitter and Reddit between 2016 and 2018. Overall, our analysis sheds light on what indicators characterize viral and non-viral visual content online, and sets the basis for developing better techniques to create or moderate content that is more likely to catch the viewer's attention.
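The classification setup described above can be sketched in a few lines: binary codebook annotations (close-up scale, presence of characters, clear emotion, long text) become features for a classifier scored by AUC. The feature names, toy data, and model choice here are illustrative assumptions, not the authors' actual dataset or pipeline.

```python
# Sketch: a virality classifier trained on codebook-style annotations.
# Features and labels are synthetic and only mirror the reported trends.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Binary codebook features: close-up scale, characters present,
# clear emotion, long text overlay.
X = rng.integers(0, 2, size=(n, 4))
# Toy label: virality correlates with the first three cues and
# anti-correlates with long text (as the abstract reports).
logits = 1.2 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2] - 1.5 * X[:, 3] - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.3f}")
```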

Read more
Human Computer Interaction

Distributed Synchronous Visualization Design: Challenges and Strategies

We reflect on our experiences as designers of COVID-19 data visualizations working in a distributed synchronous design space during the pandemic. This is especially relevant as the pandemic posed new challenges to distributed collaboration amidst civic lockdown measures and an increased dependency on spatially distributed teamwork across almost all sectors. With working from home being 'the new normal', we explored potential solutions for collaborating and prototyping remotely from our own homes using the existing tools at our disposal. Since members of our cross-disciplinary team had different technical skills, we used a range of synchronous remote design tools and methods. We aimed to preserve the richness of co-located collaboration, such as face-to-face physical presence, body gestures, facial expressions, and the making and sharing of physical artifacts. While meeting over Zoom, we sketched on paper and used digital collaboration tools, such as Miro and Google Docs. Using an auto-ethnographic approach, we articulate our challenges and strategies throughout the process, providing useful insights about synchronous distributed collaboration.

Read more
Human Computer Interaction

Drive Safe: Cognitive-Behavioral Mining for Intelligent Transportation Cyber-Physical System

This paper presents a cognitive-behavioral driver mood repair platform in intelligent transportation cyber-physical systems (IT-CPS) for road safety. In particular, we propose a driving safety platform for distracted drivers, namely \emph{drive safe}, in IT-CPS. The proposed platform recognizes the distracting activities of drivers as well as their emotions for mood repair. Further, we develop a prototype of the proposed drive safe platform to establish a proof-of-concept (PoC) for road safety in IT-CPS. In the developed driving safety platform, we employ five AI and statistical models to perform cognitive-behavioral mining for a vehicle driver and ensure safe driving. Specifically, a capsule network (CN), maximum likelihood (ML), a convolutional neural network (CNN), the Apriori algorithm, and a Bayesian network (BN) are deployed for driver activity recognition, environmental feature extraction, mood recognition, sequential pattern mining, and content recommendation for affective mood repair of the driver, respectively. In addition, we develop a communication module to interact with the systems in IT-CPS asynchronously. Thus, the developed drive safe PoC can guide vehicle drivers when they are distracted from driving due to cognitive-behavioral factors. Finally, we have performed a qualitative evaluation to measure the usability and effectiveness of the developed drive safe platform. We observe that the P-value is 0.0041 (i.e., < 0.05) in the ANOVA test. Moreover, the confidence interval analysis shows a prevalence value of around 0.93 at a 95% confidence level. These statistical results indicate high reliability in terms of the driver's safety and mental state.
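The ANOVA check reported in the evaluation can be reproduced with a standard one-way F-test; the participant groups and rating values below are illustrative, not the paper's data.

```python
# Sketch: a one-way ANOVA of the kind used in the usability evaluation.
# Group memberships and ratings are hypothetical.
from scipy.stats import f_oneway

# Hypothetical usability ratings from three participant groups.
group_a = [4.1, 4.3, 4.0, 4.4, 4.2]
group_b = [3.2, 3.5, 3.1, 3.4, 3.3]
group_c = [4.6, 4.8, 4.5, 4.7, 4.6]

stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {stat:.2f}, p = {p_value:.6f}")
# A p-value below 0.05 indicates a significant difference between
# groups, as in the reported test (p = 0.0041).
```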

Read more
Human Computer Interaction

Drone Control based on Mental Commands and Facial Expressions

There are many ways to control drones through various devices, using motions such as facial movement, special gloves with sensors, RGB cameras on a laptop, or even smartwatches that pick up gestures via motion sensors. This paper proposes an approach in which drones are controlled using brainwaves, without any of those devices. The drone control system of the current research was developed using electroencephalogram signals captured by an Emotiv Insight headset. The electroencephalogram signals are collected from the user's brain, and the processed signal is sent to the computer via Bluetooth; the headset employs Bluetooth Low Energy for wireless transmission. The user's brain is trained in order to use the generated electroencephalogram data. The final signal is transmitted to a Raspberry Pi Zero via the MQTT messaging protocol, and the Raspberry Pi controls the movement of the drone through the incoming signal from the headset. In the future, brain control could replace common input sources such as keyboards, touch screens, and other traditional interfaces, enhancing interactive experiences and providing new ways for disabled people to engage with their surroundings.
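The headset-to-drone pipeline described above hinges on translating a classified mental command into an MQTT message for the Raspberry Pi. The command names, topic, and payload shape below are illustrative assumptions; in practice the payload would be published with an MQTT client library (e.g. paho-mqtt's `client.publish`) and consumed by the controller on the Pi.

```python
# Sketch: mapping mental-command labels from the headset to drone
# movement messages. Command names and the MQTT topic are hypothetical.
import json

COMMAND_MAP = {
    "push": "forward",
    "pull": "backward",
    "lift": "up",
    "drop": "down",
    "neutral": "hover",
}

def to_mqtt_message(mental_command: str, power: float) -> tuple[str, str]:
    """Translate a classified mental command into (topic, JSON payload)."""
    if mental_command not in COMMAND_MAP:
        raise ValueError(f"unknown command: {mental_command}")
    payload = json.dumps({
        "action": COMMAND_MAP[mental_command],
        # Scale the command's power score (0..1) to a speed percentage.
        "speed": round(100 * max(0.0, min(1.0, power))),
    })
    return "drone/control", payload

topic, payload = to_mqtt_message("push", 0.6)
print(topic, payload)
```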

Read more
Human Computer Interaction

E-cheating Prevention Measures: Detection of Cheating at Online Examinations Using Deep Learning Approach -- A Case Study

This study addresses the current issues in online assessments, which are particularly relevant during the Covid-19 pandemic. Our focus is on academic dishonesty associated with online assessments. We investigated the prevalence of potential e-cheating using a case study and propose preventive measures that could be implemented. We have utilised an e-cheating intelligence agent as a mechanism for detecting online cheating practices, which is composed of two major modules: the internet protocol (IP) detector and the behaviour detector. The intelligence agent monitors the behaviour of the students and has the ability to prevent and detect any malicious practices. It can be used to assign randomised multiple-choice questions in a course examination and be integrated with online learning programs to monitor the behaviour of the students. The proposed method was tested on various data sets, confirming its effectiveness. The results revealed accuracies of 68% for the deep neural network (DNN); 92% for the long short-term memory (LSTM); 95% for the DenseLSTM; and 86% for the recurrent neural network (RNN).
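The first of the agent's two modules, the IP detector, can be illustrated with a minimal check: flag exam sessions in which distinct students share an IP address. The session format and the sharing rule here are assumptions for illustration, not the paper's implementation.

```python
# Sketch: a minimal IP-detector check in the spirit of the agent's
# first module. Session data and the flagging rule are illustrative.
from collections import defaultdict

def flag_shared_ips(sessions):
    """sessions: iterable of (student_id, ip) pairs. Returns IPs
    used by more than one distinct student during the exam."""
    by_ip = defaultdict(set)
    for student, ip in sessions:
        by_ip[ip].add(student)
    return {ip: sorted(students)
            for ip, students in by_ip.items() if len(students) > 1}

suspicious = flag_shared_ips([
    ("s1", "10.0.0.5"), ("s2", "10.0.0.5"), ("s3", "10.0.0.7"),
])
print(suspicious)  # {'10.0.0.5': ['s1', 's2']}
```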

Read more
Human Computer Interaction

EEG-based Investigation of the Impact of Classroom Design on Cognitive Performance of Students

This study investigated the neural dynamics associated with short-term exposure to different virtual classroom designs with different window placements and room dimensions. Participants engaged in five brief cognitive tasks in each design condition, including the Stroop Test, the Digit Span Test, the Benton Test, a Visual Memory Test, and an Arithmetic Test. Performance on the cognitive tests and electroencephalogram (EEG) data were analyzed by contrasting the various classroom design conditions. The cognitive-test-performance results showed no significant differences related to the architectural design features studied. We computed frequency band-power and connectivity EEG features to identify neural patterns associated with the environmental conditions. A leave-one-out machine learning classification scheme was implemented to assess the robustness of the EEG features, with the classification accuracy of the trained model repeatedly evaluated against an unseen participant's data. The classification results revealed consistent differences in the EEG features across participants in the different classroom design conditions, with a predictive power that was significantly higher than a baseline classification outcome using scrambled data. These findings were most robust during the Visual Memory Test, and were not found during the Stroop Test and the Arithmetic Test. The most discriminative EEG features were observed in bilateral occipital, parietal, and frontal regions in the theta and alpha frequency bands. While the implications of these findings for student learning are yet to be determined, this study provides rigorous evidence that brain activity features during cognitive tasks are affected by the design elements of window placement and room dimensions.
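The band-power features central to this analysis are commonly computed from a power spectral density estimate, integrating over the theta (4-8 Hz) and alpha (8-13 Hz) bands. The sampling rate, band edges, and synthetic signal below are illustrative assumptions, not the study's recording setup.

```python
# Sketch: theta/alpha band-power features from one EEG channel via
# Welch's method. The signal is synthetic (a 10 Hz alpha rhythm).
import numpy as np
from scipy.signal import welch

fs = 256  # Hz, assumed sampling rate
t = np.arange(0, 4, 1 / fs)
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.random.default_rng(0).normal(size=t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(freqs, psd, lo, hi):
    """Sum the PSD over [lo, hi) Hz, scaled by frequency resolution."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

theta = band_power(freqs, psd, 4, 8)
alpha = band_power(freqs, psd, 8, 13)
print(f"theta: {theta:.4f}, alpha: {alpha:.4f}")
```

Because the synthetic signal carries a 10 Hz rhythm, alpha power dominates theta power here; per-band features like these, computed per channel, form the feature vectors fed to the leave-one-out classifier.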

Read more
Human Computer Interaction

EEGFuseNet: Hybrid Unsupervised Deep Feature Characterization and Fusion for High-Dimensional EEG with An Application to Emotion Recognition

How to effectively and efficiently extract valid and reliable features from high-dimensional electroencephalography (EEG), and in particular how to fuse spatial and temporal dynamic brain information into a better feature representation, is a critical issue in brain data analysis. Most current EEG studies work with handcrafted features and supervised modeling, which is limited to a great extent by experience and human feedback. In this paper, we propose a practical hybrid unsupervised deep CNN-RNN-GAN based EEG feature characterization and fusion model, termed EEGFuseNet. EEGFuseNet is trained in an unsupervised manner, and deep EEG features covering spatial and temporal dynamics are automatically characterized. Compared to handcrafted features, the deep EEG features can be considered more generic and independent of any specific EEG task. The performance of the deep, low-dimensional features extracted by EEGFuseNet is carefully evaluated in an unsupervised emotion recognition application on a well-known public emotion database. The results demonstrate that the proposed EEGFuseNet is a robust and reliable model, which is easy to train and manage and performs efficiently in the representation and fusion of dynamic EEG features. In particular, EEGFuseNet is established as an optimal unsupervised fusion model with promising subject-based leave-one-out results in the recognition of four emotion dimensions (valence, arousal, dominance, and liking), which demonstrates the possibility of realizing EEG-based cross-subject emotion recognition in a purely unsupervised manner.
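The evaluation pattern described above, unsupervised low-dimensional features followed by an unsupervised recognition step, can be sketched with simple stand-ins: PCA substitutes for EEGFuseNet's learned deep encoder and k-means for the recognition step, on synthetic data. This illustrates only the pipeline shape, not the authors' CNN-RNN-GAN model.

```python
# Sketch: unsupervised feature extraction + unsupervised recognition.
# PCA and k-means are stand-ins for the deep model; data is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic "EEG feature" matrix: two latent groups in 64 dimensions.
X = np.vstack([
    rng.normal(0.0, 1.0, size=(50, 64)),
    rng.normal(3.0, 1.0, size=(50, 64)),
])

# Unsupervised dimensionality reduction (stand-in for the encoder).
Z = PCA(n_components=8, random_state=0).fit_transform(X)
# Unsupervised recognition: cluster the low-dimensional features.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))
```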

Read more
Human Computer Interaction

EUCA: A Practical Prototyping Framework towards End-User-Centered Explainable Artificial Intelligence

The ability to explain decisions to end-users is a necessity for deploying AI as critical decision support. Yet making AI explainable to end-users is a relatively ignored and challenging problem. To bridge the gap, we first identified twelve end-user-friendly explanatory forms that do not require technical knowledge to comprehend, including feature-, example-, and rule-based explanations. We then instantiated the explanatory forms as prototyping cards in four AI-assisted critical decision-making tasks, and conducted a user study to co-design low-fidelity prototypes with 32 layperson participants. The results verified the relevance of using the explanatory forms as building blocks of explanations, and identified their properties (pros, cons, applicable explainability needs, and design implications). The explanatory forms, their properties, and prototyping support constitute the End-User-Centered explainable AI framework EUCA. It serves as a practical prototyping toolkit for HCI/AI practitioners and researchers to build end-user-centered explainable AI. The EUCA framework is available at this http URL

Read more