Featured Research

Human Computer Interaction

Encodable: Configurable Grammar for Visualization Components

There are many visualization component libraries available today, and their APIs often differ from one another. Could these components be more similar, both in their APIs and in their common functionality? For someone developing a new visualization component, what should the API look like? This work draws inspiration from visualization grammars, decouples the grammar from its rendering engine, and adapts it into a configurable grammar for individual components called Encodable. Encodable helps component authors define a grammar for their components and parse users' encoding specifications into utility functions for the implementation. This paper explains the grammar design and demonstrates how to build components with it.
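To make the idea concrete, the sketch below shows, in TypeScript, one way a channel-based encoding specification could be parsed into accessor functions for a component implementation. The interfaces and helper here (EncodingSpec, createAccessors) are illustrative assumptions in the spirit of the abstract, not Encodable's actual API.

    // Illustrative sketch only: these names are assumptions, not Encodable's real API.
    type ChannelValue = { field: string } | { value: string | number };

    // A user-supplied encoding specification maps channels to data fields or constants.
    interface EncodingSpec {
      [channel: string]: ChannelValue;
    }

    type Datum = Record<string, unknown>;

    // Parse the specification into per-channel accessor functions that the
    // component implementation can call on each datum while rendering.
    function createAccessors(spec: EncodingSpec): Record<string, (d: Datum) => unknown> {
      const accessors: Record<string, (d: Datum) => unknown> = {};
      for (const [channel, def] of Object.entries(spec)) {
        if ('field' in def) {
          const fieldName = def.field;
          accessors[channel] = (d) => d[fieldName]; // read the value from the datum
        } else {
          const constant = def.value;
          accessors[channel] = () => constant; // constant channel value
        }
      }
      return accessors;
    }

    // Example: a scatter-plot-like component reading x, y, and color per datum.
    const accessors = createAccessors({
      x: { field: 'gdp' },
      y: { field: 'lifeExpectancy' },
      color: { value: 'steelblue' },
    });
    console.log(accessors.x({ gdp: 42000, lifeExpectancy: 81 })); // 42000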

Human Computer Interaction

Enhancing autonomy transparency: an option-centric rationale approach

While advances in artificial intelligence and machine learning empower a new generation of autonomous systems for assisting human performance, one major concern arises from the human factors perspective: humans have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. This lack of transparency contributes to a lack of trust in autonomy and sub-optimal team performance. To enhance autonomy transparency, this study proposed an option-centric rationale display and evaluated its effectiveness. We developed a game, Treasure Hunter, in which a human uncovers a map of treasures with help from an intelligent assistant, and we conducted a human-in-the-loop experiment with 34 participants. Results indicated that by conveying the intelligent assistant's decision-making rationale via the option-centric rationale display, participants had higher trust in the system and calibrated their trust faster. Additionally, higher trust led to higher acceptance of recommendations from the intelligent assistant and, in turn, to higher task performance.

Human Computer Interaction

EpiMob: Interactive Visual Analytics of Citywide Human Mobility Restrictions for Epidemic Control

The outbreak of coronavirus disease (COVID-19) has swept across more than 180 countries and territories since late January 2020. As a worldwide emergency response, governments have taken various measures and implemented policies, such as self-quarantine, travel restrictions, work from home, and regional lockdown, to control the rapid spread of the epidemic. The common intention of these countermeasures is to restrict human mobility, because COVID-19 is a highly contagious disease spread by human-to-human transmission. Medical experts and policymakers have expressed the urgency of being able to effectively evaluate the effects of mobility restriction policies with the aid of big data and information technology. In this study, based on big human mobility data and city POI data, we designed an interactive visual analytics system named EpiMob (Epidemic Mobility). The system interactively simulates the changes in human mobility and in the number of infected people in response to the implementation of a restriction policy or a combination of policies (e.g., regional lockdown, telecommuting, screening). Users can conveniently designate the spatial and temporal ranges of different mobility restriction policies, and the results reflecting the infection situation under each policy are dynamically displayed and can be flexibly compared. We completed multiple case studies of the largest metropolitan area in Japan (i.e., the Greater Tokyo Area) and conducted interviews with domain experts to demonstrate that our system can provide illustrative insight by measuring and comparing the effects of different human mobility restriction policies for epidemic control.
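As a rough illustration of what designating spatial and temporal ranges for different mobility restriction policies could look like as data, the sketch below defines a hypothetical policy and scenario structure in TypeScript. The types, field names, and values are assumptions for illustration only, not EpiMob's actual data model or simulation code.

    // Illustrative sketch only: these types and values are assumptions, not EpiMob's data model.
    type PolicyType = 'regionalLockdown' | 'telecommuting' | 'screening';

    interface RestrictionPolicy {
      type: PolicyType;
      areaName: string;        // spatial range, e.g. a district of the study area
      start: Date;             // temporal range during which the policy is active
      end: Date;
      complianceRate: number;  // fraction of affected trips assumed to be suppressed
    }

    // A scenario bundles policies so their combined effect can be simulated
    // and compared against other scenarios.
    interface Scenario {
      label: string;
      policies: RestrictionPolicy[];
    }

    const scenario: Scenario = {
      label: 'Lockdown + telecommuting',
      policies: [
        {
          type: 'regionalLockdown',
          areaName: 'Central Tokyo',
          start: new Date('2020-04-07'),
          end: new Date('2020-05-25'),
          complianceRate: 0.8,
        },
        {
          type: 'telecommuting',
          areaName: 'Greater Tokyo Area',
          start: new Date('2020-04-07'),
          end: new Date('2020-05-25'),
          complianceRate: 0.6,
        },
      ],
    };

    console.log(`${scenario.label}: ${scenario.policies.length} policies`);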

Human Computer Interaction

Ethical conceptual replication of visualization research considering sources of methodological bias and practical significance

General design principles for visualization have been relatively well-established based on a combination of cognitive and perceptual theory and empirical evaluations over the past 20 years. To determine how these principles hold up across use contexts and end-users, I argue that we should emphasize conceptual replication focused on determining practical significance and reducing methodological biases. This shift in thinking aims to determine how design principles interact with methodological approaches, laying the groundwork for visualization meta-science.

Human Computer Interaction

Evaluating User Experiences in Mixed Reality

Measuring user experience in MR (i.e., AR/VR) user studies is essential. Researchers apply a wide range of measurement methods using objective (e.g., biosignals, time logging), behavioral (e.g., gaze direction, movement amplitude), and subjective (e.g., standardized questionnaires) metrics. Many of these measurement instruments were adapted from use cases outside of MR but have not been validated for use in MR experiments. Researchers are therefore faced with various challenges and design alternatives when measuring immersive experiences, and these challenges become even more diverse when running out-of-the-lab studies. Measurement methods for VR experiences have recently received much attention. For example, research has started embedding questionnaires in the virtual environment for various applications, allowing users to stay closer to the ongoing experience while filling out the survey. However, there is considerable diversity in the interaction methods and practices for how the assessment procedure is conducted. This diversity underlines the lack of a shared agreement on standardized measurement tools for VR experiences. AR research is strongly oriented toward research methods from VR, e.g., using the same types of subjective questionnaires, yet some crucial technical differences require careful consideration during evaluation. This workshop at CHI 2021 provides a foundation for exchanging expertise and addressing the challenges and opportunities of research methods in MR user studies. In doing so, the workshop launches a discussion of research methods that should lead to standardized assessment methods in MR user studies. The outcomes of the workshop will be aggregated into a collective special issue journal article.

Human Computer Interaction

EventAnchor: Reducing Human Interactions in Event Annotation of Racket Sports Videos

The popularity of racket sports (e.g., tennis and table tennis) leads to high demand for data analysis of player performance, such as notational analysis. While sports videos offer many benefits for such analysis, retrieving accurate information from them can be challenging. In this paper, we propose EventAnchor, a data analysis framework that facilitates interactive annotation of racket sports videos with the support of computer vision algorithms. Our approach uses machine learning models from computer vision to help users acquire essential events from videos (e.g., a serve, the ball bouncing on the court) and offers users a set of interactive tools for data annotation. An evaluation study of a table tennis annotation system built on this framework shows significant improvements in user performance, both on simple annotation tasks involving objects of interest and on complex annotation tasks requiring domain knowledge.
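To illustrate how detected events could reduce manual annotation effort, the sketch below shows a hypothetical helper that snaps an annotator's rough timestamp to the nearest confident detection. The DetectedEvent shape and snapToAnchor function are assumptions for illustration, not EventAnchor's actual implementation.

    // Illustrative sketch only: DetectedEvent and snapToAnchor are assumptions,
    // not EventAnchor's actual implementation.
    interface DetectedEvent {
      kind: 'serve' | 'bounce' | 'stroke';
      timeSec: number;     // position of the detected event in the video
      confidence: number;  // score reported by the computer vision model
    }

    // Snap an annotator's rough click to the nearest confident detection so that
    // frame-by-frame scrubbing is not needed for every event.
    function snapToAnchor(
      roughTimeSec: number,
      detections: DetectedEvent[],
      maxGapSec = 1.0,
      minConfidence = 0.5,
    ): DetectedEvent | undefined {
      let best: DetectedEvent | undefined;
      for (const det of detections) {
        if (det.confidence < minConfidence) continue;
        const gap = Math.abs(det.timeSec - roughTimeSec);
        const bestGap = best ? Math.abs(best.timeSec - roughTimeSec) : Infinity;
        if (gap <= maxGapSec && gap < bestGap) {
          best = det;
        }
      }
      return best;
    }

    // Example: the annotator clicks near 12.4 s; the nearby bounce detection is chosen.
    const detections: DetectedEvent[] = [
      { kind: 'serve', timeSec: 10.1, confidence: 0.9 },
      { kind: 'bounce', timeSec: 12.6, confidence: 0.8 },
    ];
    console.log(snapToAnchor(12.4, detections));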

Human Computer Interaction

EvoK: Connecting loved ones through Heart Rate sharing

In this work, we present EvoK, a new way of sharing one's heart rate with close contacts, and receiving feedback from them, to alleviate social isolation and loneliness. EvoK consists of a pair of wearable prototype devices: a sender and a receiver. The sender is designed as a headband that continuously senses heart rate, with an aesthetic design intended to maximize social acceptance. The receiver is designed as a wristwatch that unobtrusively receives the loved one's continuous heart rate through a multi-modal notification system.

Human Computer Interaction

Examining the Impact of Algorithm Awareness on Wikidata's Recommender System Recoin

The global infrastructure of the Web, designed as an open and transparent system, has a significant impact on our society. However, algorithmic systems from corporate entities that neglect those principles have increasingly populated the Web. Typical representatives of these algorithmic systems are recommender systems, which influence our society both at the scale of global politics and during mundane shopping decisions. Recently, such recommender systems have come under critique for how they may strengthen existing biases or even generate new ones. To this end, designers and engineers are increasingly urged to make the functioning and purpose of recommender systems more transparent. Our research relates to the discourse of algorithm awareness, which reconsiders the role of algorithm visibility in interface design. We conducted online experiments on MTurk with 105 participants, focusing on the recommender system Recoin, a gadget for Wikidata. In these experiments, we presented users with one of three designs of Recoin's user interface, each exhibiting a different degree of explainability and interactivity. Our findings include a positive correlation between comprehension of and trust in the algorithmic system for our interactive redesign. However, our results are not yet conclusive, and they suggest that the measures of comprehension, fairness, accuracy, and trust are not yet exhaustive for the empirical study of algorithm awareness. Our qualitative insights provide a first indication of further measures. For example, our study participants were less concerned with understanding the details of an algorithmic calculation than with who or what judges the result of the algorithm.

Human Computer Interaction

Expanding Explainability: Towards Social Transparency in AI systems

As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially situated, and AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algorithm-centered. We take a developmental step towards socially situated XAI by introducing and exploring Social Transparency (ST), a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making. To explore ST conceptually, we conducted interviews with 29 AI users and practitioners grounded in a speculative design scenario. We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST's effects and implications at the technical, decision-making, and organizational levels. The framework showcases how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective action, and cultivate holistic explainability. Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.

Human Computer Interaction

Expectation Versus Reality: The Failed Evaluation of a Mixed-Initiative Visualization System

Our research aimed to present the design and evaluation of a mixed-initiative system that aids the user in handling complex datasets and dense visualization systems. We attempted to demonstrate this system with two trials of an online, between-groups, two-by-two study measuring the effects of the mixed-initiative system on user interactions and system usability. However, due to flaws in the interface design and the expectations we placed on users, we were unable to show that the adaptive system had an impact on user interactions or system usability. In this paper, we discuss the unexpected findings from our "failed" experiments and examine how we can learn from our failures to improve future research.

