Featured Researches

Human Computer Interaction

GUIGAN: Learning to Generate GUI Designs Using Generative Adversarial Networks

Graphical User Interfaces (GUIs) are ubiquitous in almost all modern desktop software, mobile applications, and websites. A good GUI design is crucial to the success of software in the market, but designing a good GUI, which requires much innovation and creativity, is difficult even for well-trained designers. The demand for rapid GUI development further increases designers' workload. The availability of automatically generated GUIs can therefore enhance design personalization and specialization, as such designs can cater to the tastes of different designers. To assist designers, we develop a model, GUIGAN, to automatically generate GUI designs. Unlike conventional image generation models that operate on image pixels, GUIGAN reuses GUI components collected from existing mobile app GUIs to compose new designs, in a manner similar to natural-language generation. GUIGAN is based on SeqGAN and models both GUI component style compatibility and GUI structure. The evaluation demonstrates that our model significantly outperforms the best baseline method by 30.77% in Fréchet Inception Distance (FID) and 12.35% in 1-Nearest-Neighbor Accuracy (1-NNA). Through a pilot user study, we provide initial evidence of the usefulness of our approach for generating acceptable brand-new GUI designs.
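The idea of composing a GUI as a sequence of reusable components, rather than pixels, can be illustrated with a minimal sketch. This is not the authors' SeqGAN model; the component library, style tokens, and greedy compatibility-weighted sampler below are hypothetical stand-ins for the learned sequential generator.

```python
import random

# Hypothetical component library: id -> (category, style token)
COMPONENTS = {
    0: ("toolbar", "dark"), 1: ("toolbar", "light"),
    2: ("list", "dark"),    3: ("list", "light"),
    4: ("button", "dark"),  5: ("button", "light"),
}

def style_compatibility(a, b):
    """Toy pairwise score: components sharing a style token are compatible."""
    return 1.0 if COMPONENTS[a][1] == COMPONENTS[b][1] else 0.1

def generate_gui(length=3, seed=0):
    """Compose a component sequence, preferring style-compatible next picks.
    Stands in for the learned sequential generator in a SeqGAN-style model."""
    rng = random.Random(seed)
    seq = [rng.choice(list(COMPONENTS))]
    for _ in range(length - 1):
        candidates = [c for c in COMPONENTS if c != seq[-1]]
        weights = [style_compatibility(seq[-1], c) for c in candidates]
        seq.append(rng.choices(candidates, weights=weights)[0])
    return seq

print(generate_gui())
```

In the actual model, the compatibility weighting would be learned adversarially rather than hand-coded.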

Read more
Human Computer Interaction

GazeBase: A Large-Scale, Multi-Stimulus, Longitudinal Eye Movement Dataset

This manuscript presents GazeBase, a large-scale longitudinal dataset containing 12,334 monocular eye-movement recordings captured from 322 college-aged subjects. Subjects completed a battery of seven tasks in two contiguous sessions during each round of recording: 1) a fixation task, 2) a horizontal saccade task, 3) a random oblique saccade task, 4) a reading task, 5/6) free viewing of cinematic video, and 7) a gaze-driven gaming task. A total of nine rounds of recording were conducted over a 37-month period, with subjects in each subsequent round recruited exclusively from the prior round. All data were collected using an EyeLink 1000 eye tracker at a 1,000 Hz sampling rate, with a calibration and validation protocol performed before each task to ensure data quality. Due to its large number of subjects and longitudinal nature, GazeBase is well suited for exploring research hypotheses in eye-movement biometrics, along with other emerging applications that apply machine learning techniques to eye-movement signal analysis.

Read more
Human Computer Interaction

Gemini: A Grammar and Recommender System for Animated Transitions in Statistical Graphics

Animated transitions help viewers follow changes between related visualizations. Specifying effective animations demands significant effort: authors must select the elements and properties to animate, provide transition parameters, and coordinate the timing of stages. To facilitate this process, we present Gemini, a declarative grammar and recommendation system for animated transitions between single-view statistical graphics. Gemini specifications define transition "steps" in terms of high-level visual components (marks, axes, legends) and composition rules to synchronize and concatenate steps. With this grammar, Gemini can recommend animation designs to augment and accelerate designers' work. Gemini enumerates staged animation designs for given start and end states, and ranks those designs using a cost function informed by prior perceptual studies. To evaluate Gemini, we conduct both a formative study on Mechanical Turk to assess and tune our ranking function, and a summative study in which 8 experienced visualization developers implement animations in D3 that we then compare to Gemini's suggestions. We find that most designs (9/11) are exactly replicable in Gemini, with many (8/11) achievable via edits to suggestions, and that Gemini suggestions avoid multiple participant errors.
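The enumerate-and-rank step can be sketched in miniature: generate candidate staged designs (here, orderings of transition steps) and sort them by a cost function. The step names, costs, and penalty rule below are invented for illustration; Gemini's actual cost function is informed by perceptual studies rather than hand-set constants.

```python
from itertools import permutations

# Hypothetical per-step costs, standing in for perceptual penalties
STEP_COST = {"update_marks": 2.0, "rescale_axis": 1.0, "update_legend": 0.5}

def design_cost(order):
    """Toy cost: base step costs, plus a penalty for rescaling an axis after
    marks have already moved (viewers lose the reference frame mid-transition)."""
    cost = sum(STEP_COST[s] for s in order)
    if order.index("rescale_axis") > order.index("update_marks"):
        cost += 3.0
    return cost

def recommend(steps):
    """Enumerate staged designs (orderings) and rank them by cost, lowest first."""
    return sorted(permutations(steps), key=design_cost)

ranked = recommend(list(STEP_COST))
print(ranked[0])
```

The real system enumerates far richer designs (timing, synchronization, component selection), but the rank-by-cost recommendation loop has this shape.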

Read more
Human Computer Interaction

Geo-Spatial Data Visualization and Critical Metrics Predictions for Canadian Elections

Open data is published by various organizations to make information available to the public. All over the world, numerous organizations maintain a considerable number of open databases containing a wealth of facts and figures. However, most of them do not offer a concise, insightful data interpretation or visualization tool that helps users process all of the information in a consistently comparable way. The Canadian Federal and Provincial Elections databases are an example: the information is spread across numerous websites as separate tables, so a user must traverse a tree structure of scattered information on each site and is left to make comparisons without proper tools, data interpretation, or visualizations. In this paper, we provide technical details of addressing this problem, using Canadian elections data (since 1867) as a case study, as it poses numerous technical challenges. We hope that the methodology used here can help in developing similar tools to achieve some of the goals of publicly available datasets. The developed tool contains data visualization, trend analysis, and prediction components. The visualization enables users to interact with the data through various techniques, including geospatial visualization. To reproduce the results, we have open-sourced the tool.

Read more
Human Computer Interaction

GlassViz: Visualizing Automatically-Extracted Entry Points for Exploring Scientific Corpora in Problem-Driven Visualization Research

In this paper, we report the development of a model and a proof-of-concept visual text analytics (VTA) tool to enhance document discovery in a problem-driven visualization research (PDVR) context. The proposed model captures the cognitive model followed by domain and visualization experts by analyzing the interdisciplinary communication channel as represented by keywords found in two disjoint collections of research papers. High distributional inter-collection similarities are employed to build informative keyword associations that serve as entry points to drive the exploration of a large document corpus. Our approach is demonstrated in the context of research on visualization for the digital humanities.
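The core idea of comparing a keyword's distributional profile across two disjoint collections can be sketched with simple co-occurrence vectors and cosine similarity. The toy corpora and the specific featurization below are assumptions for illustration, not the paper's actual model.

```python
from collections import Counter
from math import sqrt

# Two hypothetical disjoint corpora: domain papers vs. visualization papers,
# each document reduced to its keyword list.
domain_docs = [["manuscript", "annotation", "timeline"],
               ["annotation", "timeline", "provenance"]]
vis_docs = [["timeline", "layout", "annotation"],
            ["layout", "interaction", "timeline"]]

def keyword_vector(keyword, docs, vocab):
    """Distributional vector: co-occurrence counts of `keyword` with vocab terms."""
    counts = Counter()
    for doc in docs:
        if keyword in doc:
            counts.update(w for w in doc if w != keyword)
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vocab = sorted({w for d in domain_docs + vis_docs for w in d})
# Inter-collection similarity of one keyword's usage across the two corpora:
sim = cosine(keyword_vector("timeline", domain_docs, vocab),
             keyword_vector("timeline", vis_docs, vocab))
print(round(sim, 3))
```

Keywords whose usage profiles are similar in both collections would then serve as candidate entry points for exploring the corpus.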

Read more
Human Computer Interaction

Good for the Many or Best for the Few? A Dilemma in the Design of Algorithmic Advice

Applications in a range of domains, including route planning and well-being, offer advice based on the social information available in prior users' aggregated activity. When designing these applications, is it better to offer: a) advice that if strictly adhered to is more likely to result in an individual successfully achieving their goal, even if fewer users will choose to adopt it? or b) advice that is likely to be adopted by a larger number of users, but which is sub-optimal with regard to any particular individual achieving their goal? We identify this dilemma, characterized as Goal-Directed vs. Adoption-Directed advice, and investigate the design questions it raises through an online experiment undertaken in four advice domains (financial investment, making healthier lifestyle choices, route planning, training for a 5k run), with three user types, and across two levels of uncertainty. We report findings that suggest a preference for advice favoring individual goal attainment over higher user adoption rates, albeit with significant variation across advice domains; and discuss their design implications.

Read more
Human Computer Interaction

GraphFederator: Federated Visual Analysis for Multi-party Graphs

This paper presents GraphFederator, a novel approach that constructs joint representations of multi-party graphs and supports privacy-preserving visual analysis of graphs. Inspired by the concept of federated learning, we reformulate the analysis of multi-party graphs as a decentralized process. The federation framework consists of a shared module responsible for joint modeling and analysis, and a set of local modules that run on the respective graph data. Specifically, we propose a federated graph representation model (FGRM) that is learned from encrypted characteristics of multi-party graphs in the local modules. We also design multiple visualization views for joint visualization, exploration, and analysis of multi-party graphs. Experimental results on two datasets demonstrate the effectiveness of our approach.
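The shared-module/local-module split can be illustrated with a minimal sketch in which each party reduces its private edge list to an aggregate feature vector, and only those vectors reach the shared module. The degree-based features and element-wise averaging are hypothetical simplifications; the paper's FGRM is learned from encrypted characteristics, and encryption is omitted here.

```python
# Each party holds a private edge list; only an aggregated feature vector
# leaves the local module (encryption omitted in this sketch).
def local_summary(edges, n_nodes):
    """Local module: normalized per-node degree features."""
    deg = [0] * n_nodes
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    total = sum(deg) or 1
    return [d / total for d in deg]

def shared_model(summaries):
    """Shared module: element-wise average of the parties' local summaries,
    forming a joint representation without seeing any raw edges."""
    n = len(summaries)
    return [sum(vals) / n for vals in zip(*summaries)]

party_a = [(0, 1), (1, 2)]
party_b = [(0, 2), (2, 3)]
joint = shared_model([local_summary(party_a, 4), local_summary(party_b, 4)])
print(joint)
```

The design point is that the shared module operates only on derived summaries, which is what makes joint visual analysis compatible with keeping each party's graph private.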

Read more
Human Computer Interaction

Guidelines for the Development of Immersive Virtual Reality Software for Cognitive Neuroscience and Neuropsychology: The Development of Virtual Reality Everyday Assessment Lab (VR-EAL)

Virtual reality (VR) head-mounted displays (HMD) appear to be effective research tools, which may address the problem of ecological validity in neuropsychological testing. However, their widespread implementation is hindered by VR induced symptoms and effects (VRISE) and the lack of skills in VR software development. This study offers guidelines for the development of VR software in cognitive neuroscience and neuropsychology, by describing and discussing the stages of the development of Virtual Reality Everyday Assessment Lab (VR-EAL), the first neuropsychological battery in immersive VR. Techniques for evaluating cognitive functions within a realistic storyline are discussed. The utility of various assets in Unity, software development kits, and other software are described so that cognitive scientists can overcome challenges pertinent to VRISE and the quality of the VR software. In addition, this pilot study attempts to evaluate VR-EAL in accordance with the necessary criteria for VR software for research purposes. The VR neuroscience questionnaire (VRNQ; Kourtesis et al., 2019b) was implemented to appraise the quality of the three versions of VR-EAL in terms of user experience, game mechanics, in-game assistance, and VRISE. Twenty-five participants aged between 20 and 45 years with 12-16 years of full-time education evaluated various versions of VR-EAL. The final version of VR-EAL achieved high scores in every sub-score of the VRNQ and exceeded its parsimonious cut-offs. It also appeared to have better in-game assistance and game mechanics, while its improved graphics substantially increased the quality of the user experience and almost eradicated VRISE. The results substantially support the feasibility of the development of effective VR research and clinical software without the presence of VRISE during a 60-minute VR session.

Read more
Human Computer Interaction

HEMVIP: Human Evaluation of Multiple Videos in Parallel

In many research areas, such as motion and gesture generation, objective measures alone do not provide an accurate impression of key stimulus traits such as perceived quality or appropriateness. The gold standard is instead to evaluate these aspects through user studies, especially subjective evaluations of video stimuli. Common evaluation paradigms either present individual stimuli to be scored on Likert-type scales, or ask users to compare and rate videos in a pairwise fashion. However, the time and resources required for such evaluations scale poorly as the number of conditions to be compared increases. Building on standards used for evaluating the quality of multimedia codecs, this paper instead introduces a framework for granular rating of multiple comparable videos in parallel. This methodology essentially analyzes all condition pairs at once. Our contributions are 1) a proposed framework, called HEMVIP, for parallel and granular evaluation of multiple video stimuli and 2) a validation study confirming that results obtained using the tool are in close agreement with results of prior studies using conventional multiple pairwise comparisons.
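Why one parallel rating pass covers all condition pairs at once can be shown with a small sketch: a single set of side-by-side slider ratings implies an outcome for every pairwise comparison. The rating scale and data below are illustrative assumptions, not HEMVIP's actual interface.

```python
from itertools import combinations

def pairwise_preferences(ratings):
    """Expand one parallel rating pass into all implied pairwise outcomes.
    Returns {(a, b): winner} for every condition pair, with None for ties."""
    prefs = {}
    for a, b in combinations(sorted(ratings), 2):
        if ratings[a] == ratings[b]:
            prefs[(a, b)] = None          # tie
        else:
            prefs[(a, b)] = a if ratings[a] > ratings[b] else b
    return prefs

# One participant rates three conditions side by side on 0-100 sliders:
prefs = pairwise_preferences({"A": 80, "B": 55, "C": 55})
print(prefs)
```

With n conditions, one parallel trial yields n(n-1)/2 pairwise outcomes, which is why the approach scales better than running each pairwise comparison as a separate trial.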

Read more
Human Computer Interaction

HOPES -- An Integrative Digital Phenotyping Platform for Data Collection, Monitoring and Machine Learning

We describe the development of, and early experiences with, a comprehensive digital phenotyping platform: Health Outcomes through Positive Engagement and Self-Empowerment (HOPES). HOPES is based on the open-source Beiwe platform but adds a much wider range of data collection, including the integration of wearable data sources and additional sensor collection from the smartphone. Requirements were derived in part from a concurrent clinical trial for schizophrenia, which required the development of significant capabilities in HOPES for security, privacy, ease of use, and scalability, based on a careful combination of public-cloud and on-premises operation. We describe new data pipelines to clean, process, present, and analyze data, including a set of dashboards customized to the needs of research study operations and clinical care. A test use of HOPES is described through an analysis of the digital behaviors of 20 participants during the SARS-CoV-2 pandemic.

Read more