Featured Research

Human Computer Interaction

Explainable AI and Adoption of Financial Algorithmic Advisors: an Experimental Study

We study whether receiving advice from either a human or an algorithmic advisor, accompanied by five types of Local and Global explanation labelings, affects readiness to adopt, willingness to pay for, and trust in a financial AI consultant. We compare the differences over time and in key situations using a unique experimental framework in which participants play a web-based game with real monetary consequences. We observed that accuracy-based explanations of the model in the initial phases lead to higher adoption rates. When the model performs flawlessly, the type of explanation matters less for adoption. Using more elaborate feature-based or accuracy-based explanations substantially reduces the drop in adoption upon model failure. Furthermore, using an autopilot increases adoption significantly. Participants assigned to AI-labeled advice with explanations were willing to pay more for the advice than those assigned to AI-labeled advice without explanations. These results add to the literature on the importance of XAI for algorithmic adoption and trust.

Human Computer Interaction

Explainable Patterns: Going from Findings to Insights to Support Data Analytics Democratization

Over the past decades, massive efforts by companies, non-profit organizations, governments, and others have gone into supporting the concept of data democratization, promoting initiatives that educate people to confront information with data. Although this represents one of the most important advances in our free world, access to data without concrete facts to check, or without an expert to help interpret the existing patterns, hampers its intrinsic value and lessens its democratization. Thus, the benefits of giving full access to data will only be impactful if we go a step further and support data analytics democratization, assisting users in transforming findings into insights without needing domain experts, thereby promoting unconstrained access to data interpretation and verification. In this paper, we present Explainable Patterns (ExPatt), a new framework that supports lay users in exploring data and creating data stories, automatically generating plausible explanations for observed or selected findings using an external (textual) source of information, avoiding or reducing the need for domain experts. ExPatt's applicability is demonstrated through use cases involving world demographic indicators, with Wikipedia as the external source of explanations, showing how it can be used in practice toward data analytics democratization.
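
As a rough, hypothetical illustration of the finding-to-text matching a framework like ExPatt performs (the abstract does not specify its algorithm; the keyword-overlap scoring below is our stand-in, not the paper's method), consider this Python sketch that ranks candidate passages from an external textual source against a finding:

```python
def rank_explanations(finding, passages):
    """Rank candidate passages from an external textual source by word
    overlap with a finding. A crude stand-in for ExPatt's matching,
    which the abstract does not specify."""
    finding_words = set(finding.lower().split())
    scored = []
    for passage in passages:
        overlap = finding_words & set(passage.lower().split())
        scored.append((len(overlap) / len(finding_words), passage))
    return sorted(scored, reverse=True)

finding = "life expectancy in Rwanda rose sharply after 2000"
passages = [
    "After 2000 health reforms in Rwanda expanded insurance coverage",
    "Coffee exports are a major part of the Rwandan economy",
]
for score, text in rank_explanations(finding, passages):
    print(f"{score:.2f}  {text}")
```

A real system would use much richer semantic matching; the sketch only conveys the retrieval idea of pairing a finding with a plausible textual explanation.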

Human Computer Interaction

Explainable Spatial Clustering: Leveraging Spatial Data in Radiation Oncology

Advances in data collection in radiation therapy have led to an abundance of opportunities for applying data mining and machine learning techniques to promote new data-driven insights. In light of these advances, supporting collaboration between machine learning experts and clinicians is important for facilitating better development and adoption of these models. Although many medical use-cases rely on spatial data, where understanding and visualizing the underlying structure of the data is important, little is known about the interpretability of spatial clustering results by clinical audiences. In this work, we reflect on the design of visualizations for explaining novel approaches to clustering complex anatomical data from head and neck cancer patients. These visualizations were developed, through participatory design, for clinical audiences during a multi-year collaboration with radiation oncologists and statisticians. We distill this collaboration into a set of lessons learned for creating visual and explainable spatial clustering for clinical users.

Human Computer Interaction

Exploratory Analysis of File System Metadata for Rapid Investigation of Security Incidents

Investigating cybersecurity incidents requires in-depth knowledge on the analyst's part. Moreover, the whole process is demanding due to the vast volumes of data that need to be analyzed. While various techniques now exist to help with particular tasks of the analysis, the process as a whole still requires substantial manual work and expert skill. We propose an approach that allows disk snapshots to be analyzed more efficiently and with lower demands on expert knowledge. Following a user-centered design methodology, we implemented an analytical tool that guides analysts during security incident investigations. The viability of the solution was validated in an evaluation conducted with members of several security teams.
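
The abstract does not describe the tool's internals; as a minimal, hypothetical sketch of the kind of exploratory query such a tool automates, the Python below filters a parsed disk-snapshot metadata listing to files modified inside a suspected incident window (the paths, timestamps, and data model are invented for illustration):

```python
from datetime import datetime, timezone

# Hypothetical rows from a parsed disk snapshot: (path, modification time).
records = [
    ("/etc/passwd",          datetime(2021, 3, 1, 9, 15, tzinfo=timezone.utc)),
    ("/tmp/.hidden/payload", datetime(2021, 3, 4, 2, 41, tzinfo=timezone.utc)),
    ("/var/log/auth.log",    datetime(2021, 3, 4, 2, 43, tzinfo=timezone.utc)),
]

# One typical triage query: which files changed inside the incident window?
window = (datetime(2021, 3, 4, 2, 0, tzinfo=timezone.utc),
          datetime(2021, 3, 4, 3, 0, tzinfo=timezone.utc))
suspicious = [(p, t) for p, t in records if window[0] <= t <= window[1]]
for path, mtime in sorted(suspicious, key=lambda r: r[1]):
    print(mtime.isoformat(), path)
```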

Human Computer Interaction

Exploring Asymmetric Roles in Mixed-Ability Gaming

The landscape of digital games is segregated by player ability. For example, sighted players have a multitude of highly visual games at their disposal, while blind players may choose from a variety of audio games. Attempts to improve cross-ability access to either are often limited in the experience they provide or disregard multiplayer experiences. We explore ability-based asymmetric roles as a design approach to creating engaging and challenging mixed-ability play. Our team designed and developed two collaborative testbed games exploring asymmetric interdependent roles. In a remote study with 13 mixed-visual-ability pairs, we assessed how the roles affected perceptions of engagement, competence, and autonomy, using a mixed-methods approach. The games provided an engaging and challenging experience in which differences in visual ability were not limiting. Our results underline how experiences that are unequal by design can give rise to an equitable joint experience.

Human Computer Interaction

Exploring Design and Governance Challenges in the Development of Privacy-Preserving Computation

Homomorphic encryption, secure multi-party computation, and differential privacy are part of an emerging class of Privacy Enhancing Technologies which share a common promise: to preserve privacy whilst also obtaining the benefits of computational analysis. Due to their relative novelty, complexity, and opacity, these technologies provoke a variety of novel questions for design and governance. We interviewed researchers, developers, industry leaders, policymakers, and designers involved in their deployment to explore motivations, expectations, perceived opportunities and barriers to adoption. This provided insight into several pertinent challenges facing the adoption of these technologies, including: how they might make a nebulous concept like privacy computationally tractable; how to make them more usable by developers; and how they could be explained and made accountable to stakeholders and wider society. We conclude with implications for the development, deployment, and responsible governance of these privacy-preserving computation techniques.
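
To make this class of technologies concrete (this example is ours, not from the paper), here is a minimal Python sketch of one of the three techniques named above: differential privacy via the Laplace mechanism, releasing a counting query whose sensitivity is 1:

```python
import random

def laplace_sample(scale: float) -> float:
    # The difference of two i.i.d. Exp(1) draws is a standard Laplace
    # variate; scaling it gives Laplace(0, scale).
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_count(values, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1 (one record changes the count
    # by at most 1), so Laplace noise with scale 1/epsilon suffices.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_sample(1.0 / epsilon)

salaries = [41000, 52000, 67000, 38000, 75000, 59000]
# Release "how many salaries exceed 50k?" with epsilon = 0.5.
print(private_count(salaries, lambda s: s > 50000, epsilon=0.5))
```

The noisy answer lets an analyst obtain the benefit of the computation while bounding what can be inferred about any single record, which is precisely the promise the interviewees weigh against usability and explainability concerns.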

Human Computer Interaction

Exploring How Personality Models Information Visualization Preferences

Recent research on information visualization has shown that individual differences mediate how users interact with visualization systems. Our exploratory study focuses on whether personality affects user preferences for the idioms used in hierarchy, evolution-over-time, and comparison contexts. Specifically, we leverage all personality variables from the Five-Factor Model and the three dimensions of Locus of Control (LoC) using correlation and clustering approaches. The correlation-based method suggests that Neuroticism, Openness to Experience, Agreeableness, several facets of each trait, and the External dimensions of LoC mediate how much individuals prefer certain idioms. In addition, our cluster-based analysis shows that Neuroticism, Extraversion, Conscientiousness, and all LoC dimensions affect preferences for idioms in hierarchy and evolution contexts. Our results support incorporating in-depth knowledge of personality into the design pipeline of visualization systems.
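
The abstract names the two analysis styles but not the protocol; the following Python sketch, on synthetic data with hypothetical variable names, shows the general shape of a correlation-based test (Spearman's rho between a trait and an idiom preference) and a cluster-based comparison (grouping participants by trait profile):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 120  # hypothetical number of participants

# Synthetic trait scores and a preference rating for one visualization
# idiom, on arbitrary scales; the paper's measures are not shown here.
neuroticism = rng.normal(0, 1, n)
openness = rng.normal(0, 1, n)
treemap_pref = 0.4 * openness + rng.normal(0, 1, n)

# Correlation-based analysis: does a trait track idiom preference?
rho, p = spearmanr(openness, treemap_pref)
print(f"Openness vs. treemap preference: rho={rho:.2f}, p={p:.3f}")

# Cluster-based analysis: group participants by their trait profile,
# then compare mean preference across clusters.
profiles = np.column_stack([neuroticism, openness])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
for k in range(3):
    print(f"cluster {k}: mean preference = {treemap_pref[labels == k].mean():.2f}")
```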

Human Computer Interaction

Exploring Lightweight Interventions at Posting Time to Reduce the Sharing of Misinformation on Social Media

When users on social media share content without considering its veracity, they may unwittingly spread misinformation. In this work, we investigate the design of lightweight interventions that nudge users to assess the accuracy of information as they share it. Such assessment may deter users from posting misinformation in the first place, and their assessments may also provide useful guidance to friends aiming to evaluate those posts themselves. In support of lightweight assessment, we first develop a taxonomy of the reasons why people believe a news claim is or is not true; this taxonomy yields a checklist that can be used at posting time. We conduct evaluations demonstrating that the checklist is an accurate and comprehensive encapsulation of people's free-response rationales. In a second experiment, we study the effects of three behavioral nudges -- 1) checkboxes indicating whether headlines are accurate, 2) tagging reasons (from our taxonomy) that a post is accurate via a checklist, and 3) providing free-text rationales for why a headline is or is not accurate -- on people's intention to share the headline on social media. From an experiment with 1668 participants, we find that both providing an accuracy assessment and providing a rationale reduce the sharing of false content. They also reduce the sharing of true content, but to a lesser degree, yielding an overall decrease in the fraction of shared content that is false. Our findings have implications for designing social media and news sharing platforms that draw on richer signals of content credibility contributed by users. In addition, our validated taxonomy can be used by platforms and researchers to gather rationales more easily than with free-response questions.
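
The claim that sharing of both true and false content drops while the false fraction of shared content still decreases is easy to verify with arithmetic; the numbers in this Python snippet are illustrative only, not the paper's data:

```python
# Illustrative numbers only: suppose a control condition shares 40% of
# false and 60% of true headlines, and a nudge cuts false sharing by
# half but true sharing by only a sixth.
def false_fraction(share_false, share_true, n_false=100, n_true=100):
    shared_false = share_false * n_false
    shared_true = share_true * n_true
    return shared_false / (shared_false + shared_true)

print(false_fraction(0.40, 0.60))  # control: 0.40 of shared items are false
print(false_fraction(0.20, 0.50))  # nudge: both rates drop, yet the false
                                   # share of what is posted falls to ~0.29
```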

Human Computer Interaction

Exploring Navigation Styles in a FutureLearn MOOC

This paper presents, for the first time, a detailed analysis of fine-grained navigation style identification in MOOCs, backed by data from a large number of active learners. The results show that 1) while the sequential style is clearly in evidence, the global style is less prominent; 2) the majority of learners do not belong to either category; 3) navigation styles are not as stable as the literature suggests; and 4) learners can, and do, switch between navigation styles, with detrimental effects. The approach is promising, as it provides insight into online learners' temporal engagement as well as a tool for identifying vulnerable learners, which could inform personalised interventions (from teachers or automated help) in Intelligent Tutoring Systems (ITS).
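
The paper's identification method is not detailed in the abstract; as a hypothetical sketch of how sequential versus global navigation might be operationalized, the Python below classifies a learner's step sequence by the fraction of strictly in-order transitions (the thresholds are invented, not the paper's):

```python
def navigation_style(visited_steps, sequential_threshold=0.8, global_threshold=0.5):
    """Classify a step sequence as 'sequential', 'global', or 'neither'
    from the fraction of +1 transitions. Thresholds are illustrative."""
    transitions = list(zip(visited_steps, visited_steps[1:]))
    if not transitions:
        return "neither"
    in_order = sum(1 for a, b in transitions if b == a + 1)
    ratio = in_order / len(transitions)
    if ratio >= sequential_threshold:
        return "sequential"
    if ratio <= global_threshold:
        return "global"
    return "neither"

print(navigation_style([1, 2, 3, 4, 5]))     # sequential
print(navigation_style([1, 7, 3, 9, 2]))     # global
print(navigation_style([1, 2, 3, 9, 4, 5]))  # neither
```

Recomputing such a label over successive time windows is one simple way to surface the style switching the paper reports.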

Human Computer Interaction

Exploring Semi-Supervised Learning for Predicting Listener Backchannels

Developing human-like conversational agents is a prime area of HCI research and subsumes many tasks. Predicting listener backchannels is one such actively researched task. While many studies have used different approaches for backchannel prediction, they have all depended on manual annotation of large datasets, a bottleneck that limits the scalability of development. To this end, we propose using semi-supervised techniques to automate the process of identifying backchannels, thereby easing the annotation burden. To analyze the feasibility of our identification module, we compared backchannel prediction models trained on (a) manually annotated and (b) semi-supervised labels. Quantitative analysis revealed that the proposed semi-supervised approach attains 95% of the former's performance. Our user-study findings revealed that almost 60% of participants found the backchannel responses predicted by the proposed model more natural. Finally, we analyzed the impact of personality on the type of backchannel signals and validated our findings in the user study.
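
The abstract does not name the specific semi-supervised technique; a common choice for this kind of label automation is self-training with pseudo-labels, sketched below in Python on synthetic data (the features, model, and confidence threshold are assumptions, not the paper's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical listener features (e.g., pitch, gaze) for frames that are
# backchannel (1) or not (0); the real features are not in the abstract.
X_labeled = rng.normal(size=(50, 4))
y_labeled = (X_labeled[:, 0] + X_labeled[:, 1] > 0).astype(int)
X_unlabeled = rng.normal(size=(500, 4))

# Self-training: fit on the small labeled set, then adopt confident
# predictions on unlabeled frames as pseudo-labels and refit.
clf = LogisticRegression().fit(X_labeled, y_labeled)
confidence = clf.predict_proba(X_unlabeled).max(axis=1)
confident = confidence > 0.9
X_aug = np.vstack([X_labeled, X_unlabeled[confident]])
y_aug = np.concatenate([y_labeled, clf.predict(X_unlabeled[confident])])
clf = LogisticRegression().fit(X_aug, y_aug)
print(f"pseudo-labeled {confident.sum()} of {len(X_unlabeled)} frames")
```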

