Featured Research

Human Computer Interaction

How do Visualization Designers Think? Design Cognition as a Core Aspect of Visualization Psychology

There are numerous opportunities for engaging in research at the intersection of psychology and visualization. While most opportunities taken up by the VIS community will likely focus on the psychology of users, there are also opportunities for studying the psychology of designers. In this position paper, I argue for the importance of studying design cognition as a necessary component of a holistic program of research on visualization psychology. I provide a brief overview of research on design cognition in other disciplines and discuss opportunities for VIS to build an analogous research program. Doing so can lead to a stronger integration of research and design practice, can provide a better understanding of how to educate and train future designers, and will likely surface both challenges and opportunities for future research.

Human Computer Interaction

How the Design of YouTube Influences User Sense of Agency

In the attention economy, video apps employ design mechanisms like autoplay that exploit psychological vulnerabilities to maximize watch time. Consequently, many people feel a lack of agency over their app use, which is linked to negative life effects such as loss of sleep. Prior design research has innovated external mechanisms that police multiple apps, such as lockout timers. In this work, we shift the focus to how the internal mechanisms of an app can support user agency, taking the popular YouTube mobile app as a test case. From a survey of 120 U.S. users, we find that autoplay and recommendations primarily undermine sense of agency, while search and playlists support it. From 13 co-design sessions, we find that when users have a specific intention for how they want to use YouTube, they prefer interfaces that support greater agency. We discuss implications for how designers can help users reclaim a sense of agency over their media use.

Human Computer Interaction

How to Improve Your Virtual Experience -- Exploring the Obstacles of Mainstream VR

What is Virtual Reality? A professional tool, made to facilitate our everyday tasks? A conceptual mistake, accompanied by cybersickness and unsolved locomotion issues since the very beginning? Or just another source of entertainment that helps us escape from our deteriorating world? The public and scientific opinions in this respect are diverse. Furthermore, as researchers, we sometimes ask ourselves whether our work in this area is really "worth it", given the ambiguous prognosis regarding the future of VR. To tackle this question, we explore three different areas of VR research in this dissertation, namely locomotion, interaction, and perception. We begin our journey by structuring VR locomotion and by introducing a novel locomotion concept for large-distance travel via virtual body resizing. In the second part, we focus on our interaction possibilities in VR. We learn how to represent virtual objects via self-transforming controllers and how to store our items in VR inventories. We design comprehensive 3D gestures for the audience and provide an I/O abstraction layer to facilitate the realization and usage of such diverse interaction modalities. The third part is dedicated to the exploration of perceptual phenomena in VR. In contrast to locomotion and interaction, our contributions in the field of perception emphasize the strong points of immersive setups. We utilize VR to transfer the illusion of virtual body ownership to non-humanoid avatars and exploit this phenomenon for novel gaming experiences with animals in the leading role. As one of our contributions, we demonstrate how to repurpose the dichoptic presentation capability of immersive setups for preattentive zero-overhead highlighting in visualizations. We round off the dissertation by coming back to VR research in general, providing a critical assessment of our contributions and sharing our lessons learned along the way.

Human Computer Interaction

How to evaluate data visualizations across different levels of understanding

Understanding a visualization is a multi-level process. A reader must extract and extrapolate from numeric facts, understand how those facts apply to both the context of the data and other potential contexts, and draw or evaluate conclusions from the data. A well-designed visualization should support each of these levels of understanding. We diagnose levels of understanding of visualized data by adapting Bloom's taxonomy, a common framework from the education literature. We describe each level of the framework and provide examples of how it can be applied to evaluate the efficacy of data visualizations along six levels of knowledge acquisition: knowledge, comprehension, application, analysis, synthesis, and evaluation. We present three case studies showing that this framework expands on existing methods to comprehensively measure how a visualization design facilitates a viewer's understanding. Although Bloom's original taxonomy suggests a strong hierarchical structure for some domains, we found few examples of dependent relationships between performance at different levels in our three case studies. If this level-independence holds for newly tested visualizations, the taxonomy could serve to inspire more targeted evaluations of the levels of understanding that are relevant to a communication goal.
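The six levels described above form an ordered structure that an evaluator can instantiate as concrete questions about a specific chart. A minimal sketch of such an instantiation follows; the example questions are illustrative assumptions for a hypothetical unemployment line chart, not the authors' actual study items.

```python
# Six levels of Bloom's taxonomy, ordered from lowest to highest, each
# paired with a hypothetical evaluation question about a line chart of
# unemployment over time (questions are invented for illustration).
bloom_levels = [
    ("knowledge",     "What was the unemployment rate in 2010?"),
    ("comprehension", "Summarize the overall trend shown in the chart."),
    ("application",   "Given the trend, estimate the rate for 2025."),
    ("analysis",      "Which region drives the change after 2008?"),
    ("synthesis",     "Combine this chart with census data to explain why."),
    ("evaluation",    "Does the chart support the headline's claim?"),
]

for level, question in bloom_levels:
    print(f"{level}: {question}")
```

Because the case studies found little dependence between levels, each question could in principle be scored independently rather than as a strict ladder.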

Human Computer Interaction

HumanACGAN: conditional generative adversarial network with human-based auxiliary classifier and its evaluation in phoneme perception

We propose a conditional generative adversarial network (GAN) incorporating humans' perceptual evaluations. A deep neural network (DNN)-based generator of a GAN can represent a real-data distribution accurately but can never represent a human-acceptable distribution: the range of data that humans accept as natural, regardless of whether the data are real. The HumanGAN was proposed to model the human-acceptable distribution: its DNN-based generator is trained using a human-based discriminator, i.e., humans' perceptual evaluations, instead of the GAN's DNN-based discriminator. However, the HumanGAN cannot represent conditional distributions. This paper proposes the HumanACGAN, a theoretical extension of the HumanGAN, to deal with conditional human-acceptable distributions. Our HumanACGAN trains a DNN-based conditional generator by regarding humans as not only a discriminator but also an auxiliary classifier. The generator is trained by deceiving both the human-based discriminator, which scores unconditioned naturalness, and the human-based classifier, which scores class-conditioned perceptual acceptability. The training can be executed using the backpropagation algorithm involving humans' perceptual evaluations. Our experimental results in phoneme perception demonstrate that the HumanACGAN can successfully train this conditional generator.
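Since humans return scores rather than analytic gradients, training a generator against human evaluations requires a gradient estimate built from perturbations. The sketch below illustrates that idea on a toy problem: the two "human" scoring functions and the linear generator are invented stand-ins, not the authors' model or protocol, and the evolution-strategy-style update merely approximates the human-in-the-loop backpropagation the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for humans' perceptual evaluations (assumptions for this
# sketch): a "discriminator" scoring unconditional naturalness and a
# "classifier" scoring class-conditioned acceptability, both in (0, 1].
def human_discriminator(x):
    return np.exp(-np.sum(x ** 2))        # pretend "natural" means near the origin

def human_classifier(x, label):
    target = -1.0 if label == 0 else 1.0  # pretend each class prefers a target
    return np.exp(-(x[0] - target) ** 2)

# Toy conditional generator: latent z plus class label -> 2-D sample.
W = rng.normal(scale=0.1, size=(3, 2))    # trainable parameters

def generate(z, label, params):
    cond = np.concatenate([z, [float(label)]])
    return cond @ params

# Humans give scores, not gradients, so the gradient is estimated from
# random parameter perturbations weighted by the human scores.
def train_step(lr=0.05, n_pert=64, sigma=0.2):
    global W
    z = rng.normal(size=2)
    label = int(rng.integers(0, 2))
    grad = np.zeros_like(W)
    for _ in range(n_pert):
        eps = rng.normal(scale=sigma, size=W.shape)
        x = generate(z, label, W + eps)
        score = human_discriminator(x) + human_classifier(x, label)
        grad += score * eps / sigma
    W += lr * grad / n_pert

for _ in range(200):
    train_step()
```

In the real setting each `score` would come from querying human raters, which is why the number of perturbations per step must stay small in practice.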

Human Computer Interaction

Humans learn too: Better Human-AI Interaction using Optimized Human Inputs

Humans rely more and more on systems with AI components. The AI community typically treats human inputs as a given and optimizes only the AI models. This view is one-sided: it neglects the fact that humans can learn, too. In this work, human inputs are optimized for better interaction with an AI model while keeping the model fixed. The optimized inputs are accompanied by instructions on how to create them. They allow humans to save time and reduce errors while keeping the required changes to the original inputs limited. We propose continuous and discrete optimization methods that modify samples iteratively. Our quantitative and qualitative evaluation, including a human study on different hand-generated inputs, shows that the generated proposals lead to lower error rates, require less effort to create, and differ only modestly from the original samples.
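The core tension the abstract describes, lowering the model's error while keeping changes to the original input limited, can be expressed as an objective with a proximity penalty. The sketch below is a minimal illustration on a toy logistic model, not the paper's method: the weights, input, and penalty weight are all invented for the example.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

w = np.array([2.0, -1.0])       # fixed, pre-trained model weights (assumed)
x_orig = np.array([0.2, 0.4])   # the human's original input (assumed)
y_true = 1                      # the label the human intends to convey

def loss(x, lam=1.0):
    # Cross-entropy of the fixed model, plus a penalty that keeps the
    # optimized input close to what the human originally produced.
    p = sigmoid(w @ x)
    ce = -np.log(p) if y_true == 1 else -np.log(1 - p)
    return ce + lam * np.sum((x - x_orig) ** 2)

def grad(x, lam=1.0):
    p = sigmoid(w @ x)
    return (p - y_true) * w + 2 * lam * (x - x_orig)

# Continuous optimization of the input itself; the model stays fixed.
x = x_orig.copy()
for _ in range(100):
    x -= 0.1 * grad(x)
```

The discrete variant in the paper would replace the gradient step with edits over a discrete sample space, but the trade-off between model error and closeness to the original input is the same.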

Human Computer Interaction

Humans-as-a-sensor for buildings: Intensive longitudinal indoor comfort models

Evaluating and optimising human comfort within the built environment is challenging due to the large number of physiological, psychological and environmental variables that affect occupant comfort preference. Human perception can help capture these disparate phenomena and interpret their impact; the challenge is collecting spatially and temporally diverse subjective feedback in a scalable way. This paper presents a methodology to collect intensive longitudinal subjective feedback of comfort-based preference using micro ecological momentary assessments on a smartwatch platform. An experiment with 30 occupants over two weeks produced 4,378 field-based surveys of thermal, light, and noise preference. The occupants and the spaces in which they left feedback were then clustered according to these preference tendencies. These groups were used to create different feature sets with combinations of environmental and physiological variables, for use in a multi-class classification task. These classification models were trained on a feature set developed from time-series attributes, environmental and near-body sensors, heart rate, and the historical preferences of both the individual and the assigned comfort group. The most accurate model had multi-class classification F1 micro scores of 64%, 80% and 86% for thermal, light, and noise preference, respectively. The discussion outlines how these models can enhance comfort preference prediction when supplementing data from installed sensors. The approach presented prompts reflection on how the building analysis community evaluates, controls, and designs indoor environments through balancing the measurement of variables with strategically asking for occupant preferences in an intensive longitudinal way.
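The reported F1 micro scores pool true and false positives over all preference classes before computing precision and recall. A short sketch of that metric follows; the preference labels are invented examples in the spirit of the paper's thermal survey, not the actual dataset.

```python
# Micro-averaged F1: pool TP/FP/FN counts across all classes, then
# compute a single precision, recall, and F1 from the pooled counts.
def f1_micro(y_true, y_pred, classes):
    tp = fp = fn = 0
    for c in classes:
        tp += sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        fp += sum(1 for t, p in zip(y_true, y_pred) if p == c and t != c)
        fn += sum(1 for t, p in zip(y_true, y_pred) if p != c and t == c)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical thermal-preference responses (illustrative only).
y_true = ["warmer", "no_change", "cooler", "no_change", "warmer"]
y_pred = ["warmer", "no_change", "no_change", "no_change", "cooler"]
score = f1_micro(y_true, y_pred, {"warmer", "no_change", "cooler"})  # 3/5 here
```

Note that for single-label multi-class predictions like these, every false positive for one class is a false negative for another, so micro-F1 coincides with plain accuracy; it differs from macro-F1, which averages per-class scores and weights rare classes more heavily.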

Human Computer Interaction

HyperTendril: Visual Analytics for User-Driven Hyperparameter Optimization of Deep Neural Networks

To mitigate the pain of manually tuning hyperparameters of deep neural networks, automated machine learning (AutoML) methods have been developed to search for an optimal set of hyperparameters in large combinatorial search spaces. However, the search results of AutoML methods depend significantly on initial configurations, making it a non-trivial task to find a proper configuration. Therefore, human intervention via a visual analytics approach holds great potential in this task. In response, we propose HyperTendril, a web-based visual analytics system that supports user-driven hyperparameter tuning processes in a model-agnostic environment. HyperTendril takes a novel approach to effectively steering hyperparameter optimization through an iterative, interactive tuning procedure that allows users to refine the search spaces and the configuration of the AutoML method based on their own insights from given results. Using HyperTendril, users can obtain insights into the complex behaviors of various hyperparameter search algorithms and diagnose their configurations. In addition, HyperTendril supports variable importance analysis to help users refine their search spaces based on the relative importance of different hyperparameters and their interaction effects. We present an evaluation demonstrating how HyperTendril helps users steer their tuning processes, via a longitudinal user study based on the analysis of interaction logs and in-depth interviews conducted while our system was deployed in a professional industrial environment.
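The refine-and-rerun loop the abstract describes, where a user narrows the search space around promising trials and launches a new search round, can be sketched in a few lines. The objective function, hyperparameters, and the use of plain random search below are all assumptions for illustration; HyperTendril itself is a visual analytics front end over such loops, not this code.

```python
import random

random.seed(0)

# Hypothetical objective standing in for the validation loss of a model
# trained with learning rate `lr` and dropout `p` (invented for the sketch).
def val_loss(lr, p):
    return (lr - 0.01) ** 2 * 1e4 + (p - 0.3) ** 2

def random_search(space, n=30):
    # Sample n configurations uniformly from the given space and rank them.
    trials = []
    for _ in range(n):
        cfg = {k: random.uniform(lo, hi) for k, (lo, hi) in space.items()}
        trials.append((val_loss(**cfg), cfg))
    return sorted(trials, key=lambda t: t[0])

# Round 1: a broad initial search space.
space = {"lr": (1e-4, 1.0), "p": (0.0, 0.9)}
round1 = random_search(space)

# User-driven refinement: shrink each range to the span of the top trials,
# mimicking the interactive narrowing step, then rerun the search.
top = [cfg for _, cfg in round1[:5]]
space = {k: (min(c[k] for c in top), max(c[k] for c in top)) for k in space}
round2 = random_search(space)
```

In the system, the decision of which ranges to shrink would be informed by the variable importance view rather than taken automatically as here.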

Human Computer Interaction

I Want My App That Way: Reclaiming Sovereignty Over Personal Devices

Dark patterns in mobile apps take advantage of cognitive biases of end-users and can have detrimental effects on people's lives. Despite growing research into remedies for dark patterns and established solutions for desktop browsers, there exists no established methodology for reducing dark patterns in mobile apps. Our work introduces GreaseDroid, a community-driven app-modification framework that enables non-expert users to selectively disable dark patterns in apps.

Human Computer Interaction

Identifying Usability Issues of Software Analytics Applications in Immersive Augmented Reality

Software analytics in augmented reality (AR) is said to have great potential. One reason this potential is not yet fully exploited may be usability problems in AR user interfaces. We present an iterative, qualitative usability evaluation, with 15 subjects, of a state-of-the-art application for software analytics in AR. We identified and resolved numerous usability issues. Most were caused by applying conventional user-interface elements such as dialog windows, buttons, and scrollbars. The city visualization itself, however, did not cause any usability issues. We therefore argue that future work should focus on making conventional user-interface elements in AR obsolete by integrating their functionality into the immersive visualization.

