Featured Research

Human Computer Interaction

Comparative Layouts Revisited: Design Space, Guidelines, and Future Directions

We present a systematic review of three comparative layouts (juxtaposition, superposition, and explicit encoding), the information visualization (InfoVis) layouts designed to support comparison tasks. For the last decade, these layouts have served as fundamental idioms in designing many visualization systems. However, we found that the layouts have been used with inconsistent and confusing terminology, and that the lessons from previous studies are fragmented. The goal of our research is to distill the results from previous studies into a consistent and reusable framework. We review 127 research papers that employed comparative layouts, including 15 papers with quantitative user studies. We first resolve the ambiguous boundaries in the design space of comparative layouts by suggesting clearer terminology (e.g., chart-wise and item-wise juxtaposition). We then identify diverse aspects of comparative layouts, such as the advantages and concerns of using each layout in real-world scenarios and researchers' approaches to overcoming those concerns. Building on the initial insights gained from Gleicher et al.'s survey, we elaborate on relevant empirical evidence distilled from our survey (e.g., the actual effectiveness of the layouts in different study settings) and identify novel facets that the original work did not cover (e.g., the layouts' familiarity to people). Finally, we report both consistent and contradictory results on the performance of comparative layouts and offer practical implications for using them, suggesting trade-offs and seven actionable guidelines.
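
As a concrete illustration of the three layouts (our sketch, not code from any of the surveyed papers), the following Python/matplotlib snippet renders the same two series as chart-wise juxtaposition, superposition, and explicit encoding of their difference:

```python
# Minimal sketch of the three comparative layouts named above.
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(12)
series_a = np.random.rand(12).cumsum()
series_b = np.random.rand(12).cumsum()

fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(14, 3))

# Chart-wise juxtaposition: each series in its own chart, side by side.
ax1.plot(x, series_a)
ax1.set_title("Juxtaposition (A)")
ax2.plot(x, series_b)
ax2.set_title("Juxtaposition (B)")

# Superposition: both series overlaid in one coordinate system.
ax3.plot(x, series_a, label="A")
ax3.plot(x, series_b, label="B")
ax3.set_title("Superposition")
ax3.legend()

# Explicit encoding: the relationship of interest (here, the
# difference) is computed and visualized directly.
ax4.bar(x, series_b - series_a)
ax4.set_title("Explicit encoding (B - A)")

plt.tight_layout()
plt.show()
```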

Read more
Human Computer Interaction

Comparing Completion Time, Accuracy, and Satisfaction in Virtual Reality vs. Desktop Implementation of the Common Coordinate Framework Registration User Interface (CCF RUI)

Working with organs and tissue blocks is an essential task in medical environments. To prepare specimens for further analysis, wet-bench workers must dissect tissue and collect spatial metadata. The Registration User Interface (RUI) was developed to allow stakeholders in the Human Biomolecular Atlas Program (HuBMAP) to register tissue blocks by size, position, and orientation, and it has been used by tissue mapping centers across the HuBMAP consortium to register a total of 45 kidney, spleen, and colon tissue blocks. In this paper, we compare three setups for registering a 3D tissue-block object to a 3D reference organ (target) object. The first setup is a 2D Desktop implementation featuring a traditional screen, mouse, and keyboard interface. The other two are virtual reality (VR) versions of the RUI: VR Tabletop, where users sit at a physical desk, and VR Standup, where users stand upright. We ran a user study involving 42 human subjects completing 14 increasingly difficult tasks followed by 30 identical tasks, and we report position accuracy, rotation accuracy, completion time, and satisfaction. We found that while VR Tabletop and VR Standup users are about three times as fast and about a third more accurate in rotation than 2D Desktop users, there are no significant differences in position accuracy. The performance values for the 2D Desktop version (22.6 seconds per task, 5.9 degrees rotation accuracy, and 1.32 mm position accuracy) confirm that the 2D Desktop interface is well suited for registering tissue blocks at a speed and accuracy that meet the needs of experts performing tissue dissection. In addition, the 2D Desktop setup is cheaper, easier to learn, and more practical for wet-bench environments than the VR setups. All three setups were implemented using the Unity game engine, and the study materials have been made available alongside videos documenting our setups.
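
The two accuracy metrics reported above can be made concrete with a short sketch. This is our illustration under assumed conventions (positions as 3D vectors in mm, orientations as unit quaternions), not the study's actual analysis code:

```python
# Minimal sketch of position and rotation accuracy for a registration task.
import numpy as np

def position_error_mm(p_user: np.ndarray, p_target: np.ndarray) -> float:
    """Euclidean distance between placed and reference positions (mm)."""
    return float(np.linalg.norm(p_user - p_target))

def rotation_error_deg(q_user: np.ndarray, q_target: np.ndarray) -> float:
    """Angle of the relative rotation between two unit quaternions (degrees)."""
    dot = abs(np.dot(q_user, q_target))  # |cos(theta / 2)|
    dot = min(1.0, dot)                  # guard against floating-point rounding
    return float(np.degrees(2.0 * np.arccos(dot)))

# Hypothetical example values, in mm and (x, y, z, w) quaternions:
print(position_error_mm(np.array([1.0, 2.0, 3.0]), np.array([1.5, 2.0, 4.0])))
print(rotation_error_deg(np.array([0.0, 0.0, 0.0, 1.0]),
                         np.array([0.0, 0.0, np.sin(np.pi / 36),
                                   np.cos(np.pi / 36)])))  # 10-degree error
```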

Read more
Human Computer Interaction

Comparing State-of-the-Art and Emerging Augmented Reality Interfaces for Autonomous Vehicle-to-Pedestrian Communication

Providing pedestrians and other vulnerable road users with a clear indication of a fully autonomous vehicle's status and intentions is crucial to enabling the two to coexist. In the last few years, a variety of external interfaces have been proposed, leveraging different paradigms and technologies, including vehicle-mounted devices (like LED panels), short-range on-road projections, and road-infrastructure interfaces (e.g., special asphalts with embedded displays). These designs have been evaluated in different settings, using mockups, specially prepared vehicles, or virtual environments, with heterogeneous evaluation metrics. Promising interfaces based on Augmented Reality (AR) have been proposed too, but their usability and effectiveness have not yet been tested. This paper aims to complement this body of literature by presenting a comparison of state-of-the-art interfaces and new designs under common conditions. To this end, an immersive Virtual Reality-based simulation was developed, recreating a well-known scenario: pedestrians crossing urban streets under non-regulated conditions. A user study was then performed to investigate the various dimensions of vehicle-to-pedestrian interaction using objective and subjective metrics. Even though no interface clearly stood out across all the considered dimensions, one of the AR designs achieved state-of-the-art results in terms of safety and trust, at the cost of higher cognitive effort and lower intuitiveness compared to LED panels showing anthropomorphic features. Together with rankings on the various dimensions, the indications about the advantages and drawbacks of the alternatives that emerged from this study provide useful information for future developments in the field.

Read more
Human Computer Interaction

Competing Models: Inferring Exploration Patterns and Information Relevance via Bayesian Model Selection

Analyzing interaction data provides an opportunity to learn about users, uncover their underlying goals, and create intelligent visualization systems. The first step toward intelligent response in visualizations is to enable computers to infer user goals and strategies by observing their interactions with a system. Researchers have proposed multiple techniques to model users; however, their frameworks often depend on the visualization design, interaction space, and dataset. Due to these dependencies, many techniques do not provide a general algorithmic solution to modeling user exploration. In this paper, we construct a series of models based on the dataset and pose user exploration modeling as a Bayesian model selection problem, where we maintain a belief over numerous competing models that could explain user interactions. Each of these competing models represents an exploration strategy the user could adopt during a session. The goal of our technique is to make high-level and in-depth inferences about the user by observing their low-level interactions. Although our proposed idea is applicable to various probabilistic model spaces, we demonstrate a specific instance of encoding exploration patterns as competing models to infer information relevance. We validate our technique's ability to infer exploration bias, predict future interactions, and summarize an analytic session using user study datasets. Our results indicate that, depending on the application, our method outperforms established baselines for bias detection and future interaction prediction. Finally, we discuss future research directions based on our proposed modeling paradigm and suggest how practitioners can use this method to build intelligent visualization systems that understand users' goals and adapt to improve the exploration process.
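
The core mechanism, maintaining a posterior belief over competing exploration models and updating it with each observed interaction, can be sketched as follows. The models, likelihoods, and interaction encoding here are hypothetical placeholders, not the paper's implementation:

```python
# Minimal sketch of Bayesian model selection over competing exploration models.
import numpy as np

class ExplorationModel:
    """One competing hypothesis about what drives the user's exploration."""
    def __init__(self, name, likelihood_fn):
        self.name = name
        self.likelihood_fn = likelihood_fn  # P(interaction | model)

    def likelihood(self, interaction):
        return self.likelihood_fn(interaction)

def update_posterior(prior, models, interaction):
    """One Bayesian update: P(M | D) is proportional to P(D | M) * P(M)."""
    unnormalized = np.array([p * m.likelihood(interaction)
                             for p, m in zip(prior, models)])
    return unnormalized / unnormalized.sum()

# Hypothetical setup: each interaction is a click on a point labeled "A" or
# "B"; one model says the user is biased toward "A", the other says they
# explore uniformly.
models = [
    ExplorationModel("biased-toward-A", lambda lbl: 0.8 if lbl == "A" else 0.2),
    ExplorationModel("uniform", lambda lbl: 0.5),
]
posterior = np.array([0.5, 0.5])  # uniform prior over the competing models
for click_label in ["A", "A", "B", "A"]:
    posterior = update_posterior(posterior, models, click_label)
print(dict(zip([m.name for m in models], posterior.round(3))))
```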

Read more
Human Computer Interaction

Computational Workflows for Designing Input Devices

Input devices, such as buttons and sliders, are the foundation of any interface. The typical user-centered design workflow requires developers and users to go through many iterations of design, implementation, and analysis. The procedure is inefficient, and human decisions strongly bias the results. While computational methods have been used to assist various design tasks, there has not been any holistic approach to automating the design of input components. My thesis proposes a series of Computational Input Design workflows: I envision a sample-efficient multi-objective optimization algorithm that cleverly selects design instances, which are instantly deployed on physical simulators. A meta-reinforcement-learning user model then simulates user behavior with each design instance on the simulators. The workflows derive Pareto-optimal designs with high efficiency and automation. I demonstrate the design of a push-button via the proposed methods; the resulting designs outperform known baselines. The Computational Input Design process can be generalized to other devices, such as joysticks, touchscreens, mice, and controllers.
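
The Pareto-optimality criterion underlying such a multi-objective workflow can be sketched briefly. This is an illustrative non-dominated filter under assumed objectives (both minimized), not the thesis's optimizer:

```python
# Minimal sketch of the Pareto-optimality test in multi-objective design:
# a design is kept only if no other design is at least as good on every
# objective and strictly better on at least one.
import numpy as np

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Return indices of non-dominated rows; lower is better on all columns."""
    n = objectives.shape[0]
    keep = []
    for i in range(n):
        dominated = any(
            np.all(objectives[j] <= objectives[i]) and
            np.any(objectives[j] < objectives[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return np.array(keep)

# Hypothetical push-button designs scored on two objectives to minimize:
# selection error rate and activation force (arbitrary units).
designs = np.array([[0.10, 2.0],   # low error, heavy press (non-dominated)
                    [0.30, 0.5],   # high error, light press (non-dominated)
                    [0.12, 1.8],   # mid error, mid force (non-dominated)
                    [0.25, 2.5]])  # dominated by the first design
print(pareto_front(designs))  # -> [0 1 2]
```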

Read more
Human Computer Interaction

Computing Touch-Point Ambiguity on Mobile Touchscreens for Modeling Target Selection Times

Finger-Fitts law (FFitts law) is a model for predicting touch-pointing times, modified from Fitts' law. It incorporates the absolute touch-point precision, a finger-tremor factor σ_a, which decreases the admissible target area and thus increases the task difficulty. There is no consensus on the best methodology to measure σ_a; candidates include running an independent calibration task and performing parameter optimization. This inconsistency could be harmful to HCI studies such as evaluating pointing techniques and comparing user groups. By integrating the results of our 1D and 2D touch-pointing experiments with reanalyses of previous studies' data, we examined the advantages and disadvantages of each approach to computing σ_a, and we found that the parameter-optimization method has the best overall prediction performance.
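
For context, FFitts law is commonly written in the effective-width form below (stated here from the original FFitts-law literature rather than from this paper, so treat the exact notation as an assumption): the observed touch-point spread σ is reduced by the absolute component σ_a before computing the index of difficulty.

```latex
% FFitts law (effective form): subtracting the absolute-precision
% component \sigma_a^2 shrinks the effective width
% W_e = \sqrt{2\pi e\,(\sigma^2 - \sigma_a^2)}, raising the difficulty.
T = a + b \log_2\!\left( \frac{A}{\sqrt{2\pi e\,(\sigma^2 - \sigma_a^2)}} + 1 \right)
```

Here A is the movement amplitude and a, b are empirically fitted regression constants, as in standard Fitts' law.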

Read more
Human Computer Interaction

Conceptual Metaphors Impact Perceptions of Human-AI Collaboration

With the emergence of conversational artificial intelligence (AI) agents, it is important to understand the mechanisms that influence users' experiences of these agents. We study a common tool in the designer's toolkit: conceptual metaphors. Metaphors can present an agent as akin to a wry teenager, a toddler, or an experienced butler. How might the choice of metaphor influence our experience of an AI agent? Sampling metaphors along the dimensions of warmth and competence (defined by psychological theories as the primary axes of variation in human social perception), we perform a study (N=260) in which we manipulate the metaphor, but not the behavior, of a Wizard-of-Oz conversational agent. Following the experience, participants are surveyed about their intention to use the agent, their desire to cooperate with the agent, and the agent's usability. Contrary to the current tendency of designers to describe AI products with high-competence metaphors, we find that metaphors signaling low competence lead to better evaluations of the agent than metaphors signaling high competence. This effect persists even though both high- and low-competence agents feature human-level performance and the wizards are blind to condition. A second study confirms that intention to adopt decreases rapidly as the competence projected by the metaphor increases. In a third study, we assess the effects of metaphor choice on potential users' desire to try out the system and find that users are drawn to systems that project higher competence and warmth. These results suggest that projecting competence may help attract new users, but those users may discard the agent unless it can quickly correct course with a lower-competence metaphor. We close with a retrospective analysis that finds similar patterns between metaphors and user attitudes toward past conversational agents such as Xiaoice, Replika, Woebot, Mitsuku, and Tay.

Read more
Human Computer Interaction

Confidence-Aware Learning Assistant

Not only correctness but also self-confidence plays an important role in improving the quality of knowledge. Undesirable situations, such as confidently held incorrect knowledge and unconfidently held correct knowledge, prevent learners from revising what they know, because these situations are not always easy for learners to perceive. To solve this problem, we propose a system that uses eye tracking to estimate self-confidence while learners solve multiple-choice questions and gives feedback about which questions should be reviewed carefully. We report the results of three studies measuring its effectiveness. (1) On a well-controlled dataset with 10 participants, our approach detected confidence and unconfidence with 81% and 79% average precision, respectively. (2) With 20 participants, we observed that correct-answer rates increased by 14% and 17% when giving feedback on unconfident correct answers and confident incorrect answers, respectively. (3) We conducted a large-scale data recording at a private school (72 high school students solving 14,302 questions) to investigate effective features and the number of required training samples.
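
To make the estimation step concrete, a minimal sketch follows. The gaze features, labels, and model choice are hypothetical illustrations of the general idea, not the authors' pipeline:

```python
# Minimal sketch: classify answer confidence from gaze features recorded
# while a learner answers a multiple-choice question.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row holds hypothetical per-question gaze features:
# [total dwell time on options (s), revisits to non-chosen options,
#  time to first fixation on the chosen option (s)].
# Values are toy numbers for illustration only.
X = np.array([[2.1, 1, 0.4], [6.8, 5, 2.2], [1.8, 0, 0.3],
              [7.5, 6, 2.9], [2.5, 2, 0.6], [5.9, 4, 1.8]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = confident, 0 = unconfident

clf = make_pipeline(StandardScaler(), LogisticRegression())
print(cross_val_score(clf, X, y, cv=3).mean())  # toy cross-validated accuracy
```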

Read more
Human Computer Interaction

Context-Dependent Implicit Authentication for Wearable Device Users

As market wearables become popular for a range of services they provide, including making financial transactions and accessing cars, based on a user's private information, the security of this information is becoming very important. However, users are often flooded with PINs and passwords in this Internet of Things (IoT) world. Additionally, hard-biometric authentication, such as facial or fingerprint recognition, is not suitable for market wearables because of their limited sensing and computation capabilities. Therefore, there is a pressing need for a burden-free implicit authentication mechanism for wearables that uses the less informative soft-biometric data these devices can easily obtain. In this work, we present a context-dependent, soft-biometric-based wearable authentication system utilizing heart rate, gait, and breathing audio signals. From our detailed analysis, we find that a binary support vector machine (SVM) with a radial basis function (RBF) kernel can achieve an average accuracy of 0.94±0.07, an F1 score of 0.93±0.08, and an equal error rate (EER) of about 0.06 at a lower confidence threshold of 0.52, which shows the promise of this work.
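
A minimal sketch of the classifier and the EER computation the abstract mentions, using scikit-learn with synthetic stand-in features (the real system's features and evaluation protocol are not reproduced here):

```python
# Minimal sketch: binary RBF-kernel SVM scoring whether sensor windows
# belong to the enrolled user, plus an equal-error-rate computation.
import numpy as np
from sklearn.metrics import roc_curve
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic soft-biometric feature windows (stand-ins for heart rate,
# gait, and breathing statistics); 1 = enrolled user, 0 = impostor.
X = np.vstack([rng.normal(0.0, 1.0, (200, 6)), rng.normal(1.2, 1.0, (200, 6))])
y = np.array([0] * 200 + [1] * 200)

clf = SVC(kernel="rbf", probability=True).fit(X, y)
scores = clf.predict_proba(X)[:, 1]

# EER: the operating point where false accept rate equals false reject rate.
fpr, tpr, thresholds = roc_curve(y, scores)
fnr = 1 - tpr
idx = np.argmin(np.abs(fpr - fnr))
print(f"EER ~= {(fpr[idx] + fnr[idx]) / 2:.3f} at threshold {thresholds[idx]:.2f}")
```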

Read more
Human Computer Interaction

Context-Responsive Labeling in Augmented Reality

Route planning and navigation are common tasks that often require additional information on points of interest. Augmented Reality (AR) enables mobile users to view text labels that compose additional information onto the real-world environment. Nonetheless, displaying all labels for points of interest on a mobile device leads to unwanted overlaps between pieces of information, so a context-responsive strategy for properly arranging labels is needed. The technique should remove overlaps, show the right level of detail, and maintain label coherence. This is necessary because the viewing angle in an AR system may change rapidly with users' behavior. Coherence plays an essential role in retaining user experience and knowledge, as well as in avoiding motion sickness. In this paper, we develop an approach that systematically manages label visibility and levels of detail and eliminates unexpected incoherent movement. We introduce three label-management strategies: (1) occlusion management, (2) level-of-detail management, and (3) coherence management, balancing the usage of the mobile phone screen. A greedy approach is developed for fast occlusion handling in AR. A level-of-detail scheme is adopted to arrange the various types of labels. A 3D scene manipulation is then built to suppress the incoherent label behavior induced by viewing-angle changes. Finally, we demonstrate the feasibility and applicability of our approach through one synthetic and two real-world scenarios, followed by a qualitative user study.
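
A greedy occlusion-handling pass of the kind mentioned above can be sketched as follows; the priority scheme, rectangle model, and fallback behavior are our assumptions, not the paper's algorithm:

```python
# Minimal sketch of greedy label occlusion handling: place labels in
# priority order, keeping a label only if its screen-space rectangle
# overlaps no label placed so far.
from dataclasses import dataclass

@dataclass
class Label:
    name: str
    priority: float          # e.g., relevance of the point of interest
    x: float; y: float       # screen-space top-left corner (pixels)
    w: float; h: float       # label extent (pixels)

def overlaps(a: Label, b: Label) -> bool:
    return not (a.x + a.w <= b.x or b.x + b.w <= a.x or
                a.y + a.h <= b.y or b.y + b.h <= a.y)

def greedy_place(labels: list[Label]) -> list[Label]:
    placed: list[Label] = []
    for lab in sorted(labels, key=lambda l: l.priority, reverse=True):
        if all(not overlaps(lab, p) for p in placed):
            placed.append(lab)  # otherwise: hide the label, or fall back
                                # to a lower level of detail (e.g., icon only)
    return placed

shown = greedy_place([
    Label("Cafe", 0.9, 10, 10, 80, 20),
    Label("Museum", 0.8, 60, 15, 90, 20),   # overlaps "Cafe" -> suppressed
    Label("Station", 0.7, 200, 40, 70, 20),
])
print([l.name for l in shown])  # ['Cafe', 'Station']
```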

Read more
