Publication


Featured research published by Jeff Huang.


Autonomous Agents and Multi-Agent Systems | 2016

Learning behaviors via human-delivered discrete feedback: modeling implicit feedback strategies to speed up learning

Robert Tyler Loftin; Bei Peng; James MacGlashan; Michael L. Littman; Matthew E. Taylor; Jeff Huang; David L. Roberts

For real-world applications, virtual agents must be able to learn new behaviors from non-technical users. Positive and negative feedback are an intuitive way to train new behaviors, and existing work has presented algorithms for learning from such feedback. That work, however, treats feedback as numeric reward to be maximized, and assumes that all trainers provide feedback in the same way. In this work, we show that users can provide feedback in many different ways, which we describe as “training strategies.” Specifically, users may not always give explicit feedback in response to an action, and may be more likely to provide explicit reward than explicit punishment, or vice versa, such that the lack of feedback itself conveys information about the behavior. We present a probabilistic model of trainer feedback that describes how a trainer chooses to provide explicit reward and/or explicit punishment and, based on this model, develop two novel learning algorithms (SABL and I-SABL) which take trainer strategy into account, and can therefore learn from cases where no feedback is provided. Through online user studies we demonstrate that these algorithms can learn with less feedback than algorithms based on a numerical interpretation of feedback. Furthermore, we conduct an empirical analysis of the training strategies employed by users, and of factors that can affect their choice of strategy.
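
A minimal sketch may help make the strategy-aware idea concrete: below, the absence of feedback is itself treated as evidence under an assumed trainer strategy, so belief shifts even when the trainer stays silent. The two candidate behaviors and the silence probabilities p_sc and p_sw are invented for illustration; this is not the paper's full SABL/I-SABL formulation.

    def likelihood(feedback, action_correct, p_silent_correct, p_silent_wrong):
        """P(feedback | hypothesis) under an assumed, hypothetical trainer strategy."""
        if action_correct:
            if feedback == "reward":
                return 1.0 - p_silent_correct
            return p_silent_correct if feedback == "none" else 1e-6
        if feedback == "punish":
            return 1.0 - p_silent_wrong
        return p_silent_wrong if feedback == "none" else 1e-6

    def update(belief, action, feedback, p_sc=0.8, p_sw=0.4):
        """Bayesian update over hypotheses about which action the trainer wants."""
        scored = {hyp: prior * likelihood(feedback, action == hyp, p_sc, p_sw)
                  for hyp, prior in belief.items()}
        total = sum(scored.values())
        return {hyp: score / total for hyp, score in scored.items()}

    # Silence after "wave" is weak evidence that "wave" was correct, because
    # this (assumed) trainer stays silent more often after correct actions.
    belief = {"wave": 0.5, "sit": 0.5}
    belief = update(belief, action="wave", feedback="none")
    print(belief)  # approximately {'wave': 0.667, 'sit': 0.333}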


IEEE Transactions on Visualization and Computer Graphics | 2015

Representing Uncertainty in Graph Edges: An Evaluation of Paired Visual Variables

Hua Guo; Jeff Huang; David H. Laidlaw

When visualizing data with uncertainty, a common approach is to treat uncertainty as an additional dimension and encode it using a visual variable. The effectiveness of this approach depends on how the visual variables chosen for representing uncertainty and other attributes interact to influence the user's perception of each variable. We report a user study on the perception of graph edge attributes when uncertainty associated with each edge and the main edge attribute are visualized simultaneously using two separate visual variables. The study covers four visual variables that are commonly used for visualizing uncertainty on line graphical primitives: lightness, grain, fuzziness, and transparency. We select width, hue, and saturation for visualizing the main edge attribute and hypothesize that we can observe interference between the visual variable chosen to encode the main edge attribute and the one chosen to encode uncertainty, as suggested by the concept of dimensional integrality. Grouping the seven visual variables as color-based, focus-based, or geometry-based, we further hypothesize that the degree of interference is affected by the groups to which the two visual variables belong. We consider two further factors in the study: discriminability level for each visual variable, as a factor intrinsic to the visual variables, and graph-task type (visual search versus comparison), as a factor extrinsic to the visual variables. Our results show that the effectiveness of a visual variable in depicting uncertainty is strongly mediated by all the factors examined here. Focus-based visual variables (fuzziness, grain, and transparency) are robust to the choice of visual variables for encoding the main edge attribute, though fuzziness has a stronger negative impact on the perception of width, and transparency has a stronger negative impact on the perception of hue, than the other uncertainty visual variables. We found that interference between hue and lightness is much greater than that between saturation and lightness, though all three are color-based visual variables. We also found a compound relationship between discriminability level and the degree of dimensional integrality. We discuss the generalizability and limitations of the results and conclude with design considerations for visualizing graph uncertainty derived from these results, including recommended choices of visual variables when the relative importance of data attributes and graph tasks is known.
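
As a concrete illustration of paired encoding, the sketch below maps a hypothetical edge weight to line width and its uncertainty to transparency, one of the focus-based variables the study found robust. The edge data and the alpha mapping are invented, not taken from the study.

    # Encode a main edge attribute as line width and its uncertainty as
    # transparency (alpha). Edge data here is made up for illustration.
    import matplotlib.pyplot as plt

    edges = [  # (x0, y0, x1, y1, weight, uncertainty in [0, 1])
        (0, 0.0, 1, 1.0, 5.0, 0.1),
        (0, 1.0, 1, 0.0, 2.0, 0.7),
        (0, 0.5, 1, 0.5, 3.5, 0.4),
    ]

    fig, ax = plt.subplots()
    for x0, y0, x1, y1, w, u in edges:
        # width carries the main attribute; higher uncertainty fades the edge
        ax.plot([x0, x1], [y0, y1], linewidth=w, alpha=1.0 - 0.8 * u,
                color="steelblue", solid_capstyle="round")
    ax.set_axis_off()
    plt.show()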


User Interface Software and Technology | 2016

SleepCoacher: A Personalized Automated Self-Experimentation System for Sleep Recommendations

Nediyana Daskalova; Danaë Metaxa-Kakavouli; Adrienne Tran; Nicole R. Nugent; Julie Boergers; John E. McGeary; Jeff Huang

We present SleepCoacher, an integrated system implementing a framework for effective self-experiments. SleepCoacher automates the cycle of single-case experiments by collecting raw mobile sensor data and generating personalized, data-driven sleep recommendations based on a collection of template recommendations created with input from clinicians. The system guides users through iterative short experiments to test the effect of recommendations on their sleep. We evaluate SleepCoacher in two studies, measuring the effect of recommendations on the frequency of awakenings, self-reported restfulness, and sleep onset latency, concluding that it is effective: participant sleep improves as adherence to SleepCoacher's recommendations and experiment schedule increases. This approach presents computationally enhanced interventions leveraging the capacity of a closed feedback loop system, offering a method for scaling guided single-case experiments in real time.
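
The single-case experiment cycle the system automates can be sketched roughly as follows; the alternating schedule, the awakening counts, and the decision rule are hypothetical stand-ins for SleepCoacher's actual templates and statistics.

    # Alternate baseline and recommendation nights, then compare a sleep
    # metric across the two conditions. All numbers are invented.
    from statistics import mean

    def assign_condition(night_index):
        """ABAB schedule: even nights baseline, odd nights intervention."""
        return "baseline" if night_index % 2 == 0 else "intervention"

    # nightly awakenings recorded by a phone sensor (made-up data)
    nights = [4, 2, 5, 3, 4, 1, 3, 2]
    groups = {"baseline": [], "intervention": []}
    for i, awakenings in enumerate(nights):
        groups[assign_condition(i)].append(awakenings)

    effect = mean(groups["baseline"]) - mean(groups["intervention"])
    if effect > 0:
        print(f"Recommendation helped: {effect:.1f} fewer awakenings/night")
    else:
        print("No improvement; try the next template recommendation")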


Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies | 2017

Lessons Learned from Two Cohorts of Personal Informatics Self-Experiments

Nediyana Daskalova; Karthik Desingh; Alexandra Papoutsaki; Diane Schulze; Han Sha; Jeff Huang

Self-experiments allow people to investigate their own individual outcomes from behavior change, often with the aid of personal tracking devices. The challenge is to design scientifically valid self-experiments that can reach conclusive results. In this paper, we aim to understand how novices run self-experiments when they are provided with a structured lesson in experimental design. We conducted a study on self-experimentation with two cohorts of students, where a total of 34 students performed a self-experiment of their choice. In the first cohort, students were given only two restrictions: a specific number of variables to track and a set duration for the study. The findings from this cohort helped us generate concrete guidelines for running a self-experiment, which we then used as the format for the next cohort. A second cohort of students used these guidelines to conduct their own self-experiments in a more structured manner. Based on the findings from both cohorts, we propose a set of guidelines for running successful self-experiments that address the pitfalls encountered by students in the study, such as inadequate study design and analysis methods. We also discuss broader implications for future self-experimenters and designers of tools for self-experimentation.
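
One guideline in this spirit, sketched with invented numbers: track a single variable for a fixed duration and compare the observed difference against day-to-day noise before drawing conclusions. This is an illustrative analysis, not the paper's prescribed method.

    # Compare an intervention (no afternoon coffee) against a baseline.
    from statistics import mean, stdev

    no_coffee = [6.5, 7.0, 7.2, 6.8, 7.1]   # hours slept, intervention days
    coffee = [6.0, 6.4, 5.9, 6.6, 6.2]      # hours slept, baseline days

    diff = mean(no_coffee) - mean(coffee)
    noise = max(stdev(no_coffee), stdev(coffee))
    print(f"difference {diff:.2f} h vs day-to-day spread {noise:.2f} h")
    # A conclusive self-experiment needs the difference to clearly exceed
    # the spread; otherwise extend the study rather than over-interpret.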


Information Processing and Management | 2015

Building a Better Mousetrap: Compressing Mouse Cursor Activity for Web Analytics

Luis A. Leiva; Jeff Huang

Websites can learn what their users do on their pages to provide better content and services to those users. A website can easily find out where a user has been, but in order to find out what content is consumed and how it was consumed at a sub-page level, prior work has proposed client-side tracking to record cursor activity, which is useful for computing the relevance for search results or determining user attention on a page. While recording cursor interactions can be done without disturbing the user, the overhead of recording the cursor trail and transmitting this data over the network can be substantial. In our work, we investigate methods to compress cursor data, taking advantage of the fact that not every cursor coordinate has equal value to the website developer. We evaluate 5 lossless and 5 lossy compression algorithms over two datasets, reporting results about client-side performance, space savings, and how well a lossy algorithm can replicate the original cursor trail. The results show that different compression techniques may be suitable for different goals: LZW offers reasonable lossless compression, but lossy algorithms such as piecewise linear interpolation and distance-thresholding offer better client-side performance and bandwidth reduction.
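
Distance-thresholding, one of the lossy schemes evaluated, is simple to sketch: keep a cursor sample only when it has moved at least a threshold distance from the last kept point. The threshold and trail below are illustrative, not from the paper's datasets.

    import math

    def distance_threshold(trail, min_dist=8.0):
        """Keep a point only if it is at least min_dist px from the last kept one."""
        kept = [trail[0]]
        for x, y in trail[1:]:
            lx, ly = kept[-1]
            if math.hypot(x - lx, y - ly) >= min_dist:
                kept.append((x, y))
        if kept[-1] != trail[-1]:
            kept.append(trail[-1])  # always keep the final position
        return kept

    trail = [(0, 0), (2, 1), (5, 3), (11, 7), (12, 7), (30, 20)]
    compressed = distance_threshold(trail)
    print(f"{len(trail)} points -> {len(compressed)} points")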


Conference on Human Information Interaction and Retrieval | 2017

SearchGazer: Webcam Eye Tracking for Remote Studies of Web Search

Alexandra Papoutsaki; James Laskey; Jeff Huang

We introduce SearchGazer, a web-based eye tracker for remote web search studies using common webcams already present in laptops and some desktop computers. SearchGazer is a pure JavaScript library that infers the gaze behavior of searchers in real time. The eye tracking model self-calibrates by watching searchers interact with the search pages and trains a mapping of eye features to gaze locations and search page elements on the screen. Contrary to typical eye tracking studies in information retrieval, this approach does not require the purchase of any additional specialized equipment, and can be done remotely in a user's natural environment, leading to cheaper and easier visual attention studies. While SearchGazer is not intended to be as accurate as specialized eye trackers, it is able to replicate many of the research findings of three seminal information retrieval papers: two that used eye tracking devices, and one that used the mouse cursor as a restricted focus viewer. Charts and heatmaps from those original papers are plotted side-by-side with SearchGazer results. While the main results are similar, there are some notable differences, which we hypothesize derive from improvements in the latest ranking technologies used by current versions of search engines and diligence by remote users. As part of this paper, we also release SearchGazer as a library that can be integrated into any search page.
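
The self-calibration idea can be sketched as a regression problem: each user interaction (e.g., a click) pairs the eye features at that moment with a known screen location, and a ridge regression maps features to gaze coordinates. The random features below are stand-ins for real eye-patch features, and the whole setup is a simplified assumption, not the library's actual model.

    import numpy as np

    rng = np.random.default_rng(0)
    F = rng.normal(size=(50, 12))             # eye features at 50 clicks
    true_W = rng.normal(size=(12, 2))
    clicks = F @ true_W + rng.normal(scale=0.1, size=(50, 2))  # (x, y) px

    lam = 1e-2  # ridge penalty keeps the fit stable with few samples
    W = np.linalg.solve(F.T @ F + lam * np.eye(12), F.T @ clicks)

    new_features = rng.normal(size=(1, 12))   # features at a later moment
    print("predicted gaze (x, y):", (new_features @ W).round(2))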


Automated Software Engineering | 2018

SEEDE: simultaneous execution and editing in a development environment

Steven P. Reiss; Qi Xin; Jeff Huang

We introduce a tool within the Code Bubbles development environment that allows for continuous execution as the programmer edits. The tool, SEEDE, shows both the intermediate and final results of execution in terms of variables, control and data flow, output, and graphics. These results are updated as the user edits. The tool can be used to help the user write new code or to find and fix bugs. The tool is explicitly designed to let the user quickly explore the execution of a method along with all the code it invokes, possibly while writing or modifying the code. The user can start continuous execution either at a breakpoint or for a test case. This paper describes the tool, its implementation, and its user interface. It presents an initial user study of the tool demonstrating its potential utility.
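
A rough approximation of continuous execution outside an IDE: watch the edited source file and re-execute a target entry point whenever it changes, surfacing results (or errors from mid-edit states) immediately. The file name and compute() entry point are hypothetical; SEEDE itself instruments execution inside Code Bubbles rather than polling files.

    import importlib.util
    import os
    import time

    def load_and_run(path):
        """(Re)load the edited file as a module and execute its compute()."""
        spec = importlib.util.spec_from_file_location("live", path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        return mod.compute()

    def watch(path, interval=0.5):
        """Re-run the file's compute() every time its mtime changes."""
        last = None
        while True:
            mtime = os.path.getmtime(path)
            if mtime != last:
                last = mtime
                try:
                    print("result:", load_and_run(path))
                except Exception as exc:  # edits may be mid-keystroke
                    print("error:", exc)
            time.sleep(interval)

    # watch("live_code.py")  # assumes a file defining compute(); Ctrl-C to stop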


Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications | 2018

The Eye of the Typer: A Benchmark and Analysis of Gaze Behavior During Typing

Alexandra Papoutsaki; Aaron Gokaslan; James Tompkin; Yuze He; Jeff Huang

We examine the relationship between eye gaze and typing, focusing on the differences between touch and non-touch typists. To enable typing-based research, we created a 51-participant benchmark dataset for user input across multiple tasks, including user input data, screen recordings, webcam video of the participants' faces, and eye tracking positions. There are patterns of eye movements that differ between the two types of typists, representing glances at the keyboard, which can be used to identify touch-typed strokes with 92% accuracy. Then, we relate eye gaze with cursor activity, aligning both pointing and typing to eye gaze. One demonstrative application of the work is in extending WebGazer, a real-time web-browser-based webcam eye tracker. We show that incorporating typing behavior as a secondary signal improves eye tracking accuracy by 16% for touch typists, and 8% for non-touch typists.
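
The glance signal can be sketched as a simple classifier: if gaze dips toward the keyboard region in a window around a key press, the stroke is likely not touch-typed. The screen-edge threshold and gaze samples below are invented for illustration and are not the paper's classifier.

    def gaze_on_keyboard(gaze_y, screen_height, margin=0.95):
        """Gaze near or past the bottom edge suggests a look at the keyboard."""
        return gaze_y > margin * screen_height

    def classify_stroke(gaze_window, screen_height):
        """A stroke is 'touch' if no sample in the window hits the keyboard."""
        if any(gaze_on_keyboard(y, screen_height) for _, y in gaze_window):
            return "non-touch"
        return "touch"

    # gaze samples (x, y) in the 200 ms around one keystroke, 1080 px screen
    window = [(640, 300), (655, 310), (660, 1060)]
    print(classify_stroke(window, screen_height=1080))  # -> non-touch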


International Joint Conference on Artificial Intelligence | 2016

WebGazer: Scalable Webcam Eye Tracking Using User Interactions

Alexandra Papoutsaki; Patsorn Sangkloy; James Laskey; Nediyana Daskalova; Jeff Huang; James Hays


National Conference on Artificial Intelligence | 2014

A strategy-aware technique for learning behaviors from discrete human feedback

Robert Tyler Loftin; James MacGlashan; Bei Peng; Matthew E. Taylor; Michael L. Littman; Jeff Huang; David L. Roberts

Collaboration


Dive into Jeff Huang's collaborations.

Top Co-Authors

Bei Peng (Washington State University)
David L. Roberts (North Carolina State University)
Matthew E. Taylor (Washington State University)
Robert Tyler Loftin (North Carolina State University)