Publication


Featured research published by Robin Jeffries.


Archive | 2004

Personalized Digital Television

John Karat; Jean Vanderdonckt; Gregory D. Abowd; Gaëlle Calvary; Gilbert Cockton; Mary Czerwinski; Steve Feiner; Elizabeth Furtado; Kristiana Höök; Robert J. K. Jacob; Robin Jeffries; Peter Johnson; Kumiyo Nakakoji; Philippe A. Palanque; Oscar Pastor; Fabio Paternò; Costin Pribeanu; Marilyn Salzman; Chris Salzman; Markus Stolze; Gerd Szwillus; Manfred Tscheligi; Gerrit C. van der Veer; Shumin Zhai; Liliana Ardissono; Alfred Kobsa; Mark T. Maybury

This chapter presents the recommendation techniques applied in the Personal Program Guide (PPG), a system that generates personalized Electronic Program Guides for Digital TV. The PPG manages a user model that stores estimates of the individual user's preferences for TV program categories. This model results from the integration of different preference acquisition modules that handle explicit user preferences, stereotypical information about TV viewers, and information about the user's viewing behavior. Observing individual viewing behavior is particularly easy because the PPG runs on the set-top box and is deeply integrated with the TV playing and video recording services offered by that type of device.
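A minimal sketch of the kind of preference blending the abstract describes, assuming a simple weighted combination of explicit preferences, stereotype priors, and observed viewing counts; the function name, weights, and data shapes are illustrative, not the PPG's actual implementation.

```python
# Hypothetical sketch: blend three preference sources into per-category scores.
from collections import defaultdict

def estimate_preferences(explicit, stereotype, viewing_counts,
                         weights=(0.5, 0.2, 0.3)):
    """Combine preference sources into one score per TV program category.

    explicit       -- dict: category -> user-stated preference in [0, 1]
    stereotype     -- dict: category -> prior from a viewer stereotype in [0, 1]
    viewing_counts -- dict: category -> number of programs watched
    """
    w_explicit, w_stereo, w_behavior = weights

    # Normalize observed viewing behavior into [0, 1].
    total_views = sum(viewing_counts.values()) or 1
    behavior = {c: n / total_views for c, n in viewing_counts.items()}

    categories = set(explicit) | set(stereotype) | set(behavior)
    scores = defaultdict(float)
    for c in categories:
        scores[c] = (w_explicit * explicit.get(c, 0.0)
                     + w_stereo * stereotype.get(c, 0.0)
                     + w_behavior * behavior.get(c, 0.0))
    return dict(scores)

# Rank categories for a personalized program guide.
ranked = sorted(estimate_preferences(
    explicit={"news": 0.9, "sports": 0.2},
    stereotype={"news": 0.6, "movies": 0.7},
    viewing_counts={"movies": 12, "news": 5, "sports": 1},
).items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```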


human factors in computing systems | 2009

Undo and erase events as indicators of usability problems

David Akers; Matthew Robert Simpson; Robin Jeffries; Terry Winograd

One approach to reducing the costs of usability testing is to facilitate the automatic detection of critical incidents: serious breakdowns in interaction that stand out during software use. This research evaluates the use of undo and erase events as indicators of critical incidents in Google SketchUp (a 3D-modeling application), measuring an indicator's usefulness by the number and types of usability problems discovered. We compared problems identified using undo and erase events to problems identified using the user-reported critical incident technique [Hartson and Castillo 1998]. In a within-subjects experiment with 35 participants, undo and erase episodes together revealed over 90% of the problems rated as severe, several of which would not have been discovered by self-report alone. Moreover, problems found by all three methods were rated as significantly more severe than those identified by only a subset of methods. These results suggest that undo and erase events will serve as useful complements to user-reported critical incidents for low cost usability evaluation of creation-oriented applications like SketchUp.
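A small sketch of how undo and erase events might be pulled out of an interaction log and grouped into reviewable episodes; this is not SketchUp's actual instrumentation, and the event format, action names, and window size are assumptions.

```python
# Hypothetical sketch: flag undo/erase episodes in an interaction log.
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # seconds since session start
    action: str        # e.g. "draw", "move", "undo", "erase"

BACKTRACK_ACTIONS = {"undo", "erase"}

def find_episodes(events, context_window=10.0):
    """Return (start, end) time spans surrounding backtracking events.

    Consecutive undo/erase events whose context windows overlap are merged
    into one episode, so one interaction breakdown is reviewed once.
    """
    episodes = []
    for e in events:                      # assumes events sorted by time
        if e.action not in BACKTRACK_ACTIONS:
            continue
        start, end = e.timestamp - context_window, e.timestamp + context_window
        if episodes and start <= episodes[-1][1]:
            episodes[-1] = (episodes[-1][0], end)   # merge overlapping spans
        else:
            episodes.append((start, end))
    return episodes

log = [Event(3.0, "draw"), Event(15.2, "undo"), Event(16.8, "undo"),
       Event(90.5, "erase")]
print(find_episodes(log))   # time spans an evaluator would review in context
```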


Ways of Knowing in HCI | 2014

Understanding User Behavior Through Log Data and Analysis

Susan T. Dumais; Robin Jeffries; Daniel M. Russell; Diane Tang; Jaime Teevan

HCI researchers are increasingly collecting rich behavioral traces of user interactions with online systems in situ at a scale not previously possible. These logs can be used to characterize user interactions with existing systems and compare different designs. Large-scale log studies give rise to new challenges in experimental design, data collection and interpretation, and ethics. The chapter discusses how to address these challenges using search engine logs, but the methods are applicable to other types of log data.
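As an illustration of the "compare different designs" use of logs mentioned above, here is a toy comparison of a simple click-through metric across experiment arms; the record fields and metric are hypothetical, not drawn from the chapter.

```python
# Illustrative sketch: compare a click-through metric across experiment arms in log data.
from collections import defaultdict

def click_through_rate(log_records):
    """log_records: iterable of dicts with 'arm', 'queries', 'clicks' fields."""
    totals = defaultdict(lambda: {"queries": 0, "clicks": 0})
    for r in log_records:
        totals[r["arm"]]["queries"] += r["queries"]
        totals[r["arm"]]["clicks"] += r["clicks"]
    return {arm: (t["clicks"] / t["queries"] if t["queries"] else 0.0)
            for arm, t in totals.items()}

records = [
    {"arm": "control",   "queries": 1200, "clicks": 540},
    {"arm": "treatment", "queries": 1180, "clicks": 610},
]
print(click_through_rate(records))   # per-arm click-through rates
```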


ACM Transactions on Computer-Human Interaction | 2012

Backtracking Events as Indicators of Usability Problems in Creation-Oriented Applications

David Akers; Robin Jeffries; Matthew Robert Simpson; Terry Winograd

The diversity of user goals and strategies makes creation-oriented applications such as word processors or photo editors difficult to test comprehensively. Evaluating such applications requires testing a large pool of participants to capture the diversity of experience, but traditional usability testing can be prohibitively expensive. To address this problem, this article contributes a new usability evaluation method called backtracking analysis, designed to automate the process of detecting and characterizing usability problems in creation-oriented applications. The key insight is that interaction breakdowns in creation-oriented applications often manifest themselves in backtracking operations that can be automatically logged (e.g., undo and erase operations). Backtracking analysis synchronizes these events to contextual data such as screen capture video, helping the evaluator to characterize specific usability problems. The results from three experiments demonstrate that backtracking events can be effective indicators of usability problems in creation-oriented applications, and can yield a cost-effective alternative to traditional laboratory usability testing.
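A sketch of the synchronization step the abstract mentions, assuming backtracking events and the screen-capture recording share a common clock; the parameters and frame arithmetic are illustrative, not the article's tooling.

```python
# Hypothetical sketch: map logged backtracking events to screen-capture clips.
def video_clips(backtrack_times, video_start, lead=8.0, tail=4.0, fps=30):
    """Map event times (epoch seconds) to (start_frame, end_frame) clips.

    backtrack_times -- timestamps of undo/erase events
    video_start     -- epoch time when the screen capture began
    lead / tail     -- seconds of context to keep before / after each event
    """
    clips = []
    for t in backtrack_times:
        offset = t - video_start
        start = max(0.0, offset - lead)
        end = offset + tail
        clips.append((int(start * fps), int(end * fps)))
    return clips

# Example: two backtracking events in a session whose recording started at t=1000.
print(video_clips([1042.5, 1100.0], video_start=1000.0))
```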


ubiquitous computing | 2010

Rhythms and plasticity: television temporality at home

Lilly Irani; Robin Jeffries; Andrea Knight

Digital technologies have enabled new temporalities of media consumption in the home. Through a field study of home television viewing practices, we investigated temporal orderings of television watching. In contrast to traditional pictures of television use, our evidence suggests that rhythms across households play an important role in shaping television watching. Further, we found a flexibility and openness within the patterns of television viewing that we refer to as “plasticity.” Our data suggest that plasticity and rhythms co-exist and together compose the qualitative experience of domestic television time; an understanding of both aspects of temporality suggests an approach for the design of future television technologies.


hawaii international conference on system sciences | 2009

Task behaviors during web search: The difficulty of assigning labels

Daniel M. Russell; Diane Tang; Melanie Kellar; Robin Jeffries

By examining searcher behavior on a large search engine, we have identified seven basic kinds of task behaviors that can be observed in web search session logs. In the studies reported, we first manually labeled 700 complete web sessions, and then subsequently had 23 searchers self-label 252 days of their own sessions to give an accurate picture of what kinds of tasks people are doing when they search. From these two studies, we have found that the most accurate labeling of search task session data is done by the searchers themselves, and that it is very difficult for an external observer or automatic classifier to infer where the task boundaries are or what the actual user task goal is.
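For context on why external labeling is hard, here is the kind of gap-based session segmentation an automatic classifier might attempt; the threshold and data format are illustrative, and the paper's point is that such heuristics often disagree with the searcher's own task boundaries.

```python
# Hypothetical sketch: split a query stream into sessions on idle gaps.
def split_sessions(queries, gap_minutes=30):
    """queries: list of (timestamp_minutes, query_text) pairs, sorted by time."""
    sessions, current = [], []
    last_time = None
    for t, q in queries:
        if last_time is not None and t - last_time > gap_minutes:
            sessions.append(current)
            current = []
        current.append((t, q))
        last_time = t
    if current:
        sessions.append(current)
    return sessions

stream = [(0, "3d modeling tutorial"), (5, "sketchup undo shortcut"),
          (120, "flights to hawaii"), (125, "hicss 2009 deadline")]
print(split_sessions(stream))   # two sessions by the gap heuristic; actual tasks may differ
```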


human factors in computing systems | 2016

Science and Service, Innovation and Inspiration: Celebrating the Life of John Karat

Susan M. Dray; Clare-Marie Karat; John M. Carroll; Lorrie Faith Cranor; Robin Jeffries; Zhengjie Liu; Arnold Lund; Ben Shneiderman; Gerrit C. van der Veer

This panel will highlight and celebrate the life and work of John Karat, who passed away from pancreatic cancer last year. We will discuss his many contributions to the SIGCHI community, as well as the wider international community of people doing work in this area, focusing on both his scientific achievements and service contributions.


Journal of Usability Studies | 2007

Making usability recommendations useful and usable

Rolf Molich; Robin Jeffries; Joseph S. Dumas


Archive | 2008

Sensemaking for the rest of us

Daniel M. Russell; Robin Jeffries; Lilly Irani


human factors in computing systems | 2008

User experience at Google: focus on the user and all else will follow

Irene Au; Richard Boardman; Robin Jeffries; Patrick Larvie; Antonella Pavese; Jens Riegelsberger; Kerry Rodden; Molly Stevens

Collaboration


Dive into Robin Jeffries's collaborations.

Top Co-Authors

Lilly Irani, University of California