Publication


Featured research published by Amy Hurst.


conference on computer supported cooperative work | 2002

Revisiting the visit: understanding how technology can shape the museum visit

Rebecca E. Grinter; Paul M. Aoki; Margaret H. Szymanski; James D. Thornton; Allison Woodruff; Amy Hurst

This paper reports findings from a study of how a guidebook was used by pairs of visitors touring a historic house. We describe how the guidebook was incorporated into their visit in four ways: shared listening, independent use, following one another, and checking in on each other. We discuss how individual and groupware features were adopted in support of different visiting experiences, and illustrate how that adoption was influenced by social relationships, the nature of the current visit, and any museum visiting strategies that the couples had. Finally, we describe how the guidebook facilitated awareness between couples, and how awareness of non-guidebook users (strangers) influenced use.


human factors in computing systems | 2005

Breakaway: an ambient display designed to change human behavior

Nassim JafariNaimi; Jodi Forlizzi; Amy Hurst; John Zimmerman

We present Breakaway, an ambient display that encourages people whose job requires them to sit for long periods of time to take breaks more frequently. Breakaway uses the information from sensors placed on an office chair to communicate in a non-obtrusive manner how long the user has been sitting. Breakaway is a small sculpture placed on the desk. Its design is inspired by animation arts and theater, which rely heavily on body language to express emotions. Its shape and movement reflect the form of the human body: an upright position reflecting the body's refreshed pose, and a slouching position reflecting the body's pose after sitting for a long time. An initial evaluation shows a correlation between the movement of the sculpture and when participants took breaks, suggesting that ambient displays that make use of aesthetic and lifelike form might be promising for making positive changes in human behavior.


machine vision applications | 2003

The perceptive workbench: Computer-vision-based gesture tracking, object tracking, and 3D reconstruction for augmented desks

Thad Starner; Bastian Leibe; David Minnen; Tracy Westyn; Amy Hurst; Justin Weeks

The Perceptive Workbench endeavors to create a spontaneous and unimpeded interface between the physical and virtual worlds. Its vision-based methods for interaction constitute an alternative to wired input devices and tethered tracking. Objects are recognized and tracked when placed on the display surface. By using multiple infrared light sources, the object's 3-D shape can be captured and inserted into the virtual interface. This ability permits spontaneity, since either preloaded objects or those objects selected at run-time by the user can become physical icons. Integrated into the same vision-based interface is the ability to identify 3-D hand position, pointing direction, and sweeping arm gestures. Such gestures can enhance selection, manipulation, and navigation tasks. The Perceptive Workbench has been used for a variety of applications, including augmented reality gaming and terrain navigation. This paper focuses on the techniques used in implementing the Perceptive Workbench and the system's performance.


designing interactive systems | 2004

Infotropism: living and robotic plants as interactive displays

David Holstius; John A. Kembel; Amy Hurst; Peng-Hui Wan; Jodi Forlizzi

Designers often borrow from the natural world to achieve pleasing, unobtrusive designs. We have extended this practice by combining living plants with sensors and lights in an interactive display, and by creating a robotic analogue that mimics phototropic behavior. In this paper, we document our design process and report the results of a 2-week field study. We put our living plant display, and its robotic counterpart, in a cafeteria between pairs of trash and recycling containers. Contributions of recyclables or trash triggered directional bursts of light that gradually induced the plant displays to lean toward the more active container. In interviews, people offered explanations for the displays and spoke of caring for the plants. A marginally significant increase in recycling behavior (p=.08) occurred at the display with living plants. Apparent increases also occurred at the robotic display and a unit with only lights. Our findings indicate value in exploring the use of living material and biomimetic forms in displays, and in using lightweight robotics to deliver simple rewards.


conference on computer supported cooperative work | 2008

Sotto Voce: Facilitating Social Learning in a Historic House

Margaret H. Szymanski; Paul M. Aoki; Rebecca E. Grinter; Amy Hurst; James D. Thornton; Allison Woodruff

This study examines visitors’ use of two different electronic guidebook prototypes, the second an iteration of the first, that were developed to support social interaction between companions as they tour a historic house. Three studies were conducted in which paired visitors’ social interactions were video- and audio-recorded for analysis. Using conversation analysis, the data from the use of prototype 1 and prototype 2 were compared. It was found that audio delivery methods were consequential to the ways in which visitors structurally organized their social activity. Further, the availability of structural opportunities for social interaction between visitors has implications for the ways in which the learning process occurs in museum settings.


intelligent user interfaces | 2010

Automatically identifying targets users interact with during real world tasks

Amy Hurst; Scott E. Hudson; Jennifer Mankoff

Information about the location and size of the targets that users interact with in real world settings can enable new innovations in human performance assessment and software usability analysis. Accessibility APIs provide some information about the size and location of targets. However, this information is incomplete because it does not support all targets found in modern interfaces and the reported sizes can be inaccurate. These accessibility APIs access the size and location of targets through low-level hooks to the operating system or an application. We have developed an alternative solution for target identification that leverages visual affordances in the interface, and the visual cues produced as users interact with targets. We have used our novel target identification technique in a hybrid solution that combines machine learning, computer vision, and accessibility API data to find the size and location of targets users select with 89% accuracy. Our hybrid approach is superior to the performance of the accessibility API alone: in our dataset of 1355 targets covering 8 popular applications, only 74% of the targets were correctly identified by the API alone.
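
As a rough illustration of the hybrid idea (and not the authors' actual pipeline), the sketch below resolves a click against two imperfect sources: the accessibility API's reported bounds when they look plausible, and a vision-derived bounding box otherwise. All names, types, and the plausibility heuristic are illustrative assumptions.

# Hypothetical sketch of a hybrid target-identification step: prefer the
# accessibility API's reported bounds when they cover the click and look like
# a real widget, otherwise fall back to a bounding box recovered from visual
# cues (e.g., the highlight drawn around a hovered button). The heuristic
# threshold is illustrative, not taken from the paper.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def resolve_target(click_x: int, click_y: int,
                   api_rect: Optional[Rect],
                   vision_rect: Optional[Rect]) -> Optional[Rect]:
    """Pick a target bounding box for a click from two imperfect sources."""
    # Trust the accessibility API only if it actually covers the click point
    # and reports a size typical of an interactive widget (heuristic).
    if api_rect and api_rect.contains(click_x, click_y) and api_rect.w * api_rect.h > 16:
        return api_rect
    # Otherwise fall back to the visually detected region around the click.
    if vision_rect and vision_rect.contains(click_x, click_y):
        return vision_rect
    return None

# Example: the API misses a custom-drawn toolbar button, vision recovers it.
print(resolve_target(105, 42, None, Rect(90, 30, 32, 24)))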


intelligent user interfaces | 2008

Automatically detecting pointing performance

Amy Hurst; Scott E. Hudson; Jennifer Mankoff; Shari Trewin

Since not all persons interact with computer systems in the same way, computer systems should not interact with all individuals in the same way. This paper presents a significant step in automatically detecting characteristics of persons with a wide range of abilities based on observing their user input events. Three datasets are used to build learned statistical models on pointing data collected in a laboratory setting from individuals with varying ability to use computer pointing devices. The first dataset is used to distinguish between pointing behaviors from individuals with pointing problems vs. individuals without, with 92.7% accuracy. The second is used to distinguish between pointing data from Young Adults and Adults vs. Older Adults vs. individuals with Parkinson's Disease with 91.6% accuracy. The final dataset is used to predict the need for a specific adaptation based on a user's performance with 94.4% accuracy. These results suggest that it may be feasible to use such models to automatically identify computer users who would benefit from accessibility tools, and to even make specific tool recommendations.
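
The general shape of this approach can be sketched as follows, using an assumed feature set, synthetic data, and an off-the-shelf classifier rather than the paper's actual features or models:

# Illustrative sketch only: learn a statistical model that maps per-movement
# pointing features to a group label. The features, the synthetic data, and
# the choice of classifier are assumptions for demonstration purposes.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy feature vectors per pointing movement:
# [movement time (s), path length / straight-line distance,
#  number of submovements, click slip (0/1)]
smooth = np.column_stack([rng.normal(0.8, 0.2, 200), rng.normal(1.1, 0.05, 200),
                          rng.poisson(1, 200), rng.binomial(1, 0.02, 200)])
effortful = np.column_stack([rng.normal(1.8, 0.5, 200), rng.normal(1.5, 0.2, 200),
                             rng.poisson(4, 200), rng.binomial(1, 0.2, 200)])
X = np.vstack([smooth, effortful])
y = np.array([0] * 200 + [1] * 200)  # 0 = no pointing problems, 1 = pointing problems

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())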


ubiquitous computing | 2001

The Conversational Role of Electronic Guidebooks

Allison Woodruff; Margaret H. Szymanski; Paul M. Aoki; Amy Hurst

We describe an electronic guidebook prototype and report on a study of its use in a historic house. Visitors were given a choice of information delivery modes, and generally preferred audio played through speakers. In this delivery mode, visitors assigned the electronic guidebook a conversational role, e.g., it was granted turns in conversation, it introduced topics of conversation, and visitors responded to it verbally. We illustrate the integration of the guidebook into natural conversation by showing that discourse with the electronic guidebook followed the conversational structure of storytelling. We also demonstrate that visitors coordinated object choice and physical positioning to ensure that the electronic guidebooks played a role in their conversations. Because the visitors integrated the electronic guidebooks in their existing conversations with their companions, they achieved social interactions with each other that were more fulfilling than those that occur with other presentation methods such as traditional headphone audio tours.


conference on computers and accessibility | 2008

Understanding pointing problems in real world computing environments

Amy Hurst; Jennifer Mankoff; Scott E. Hudson

Understanding how pointing performance varies in real world computer use and over time can provide valuable insight about how systems should accommodate changes in pointing behavior. Unfortunately, pointing data from individuals with pointing problems is rarely studied during real world use. Instead, it is most frequently evaluated in a laboratory where it is easier to collect and evaluate data. We developed a technique to collect and analyze real world pointing performance which we used to investigate the variance in performance of six individuals with a range of pointing abilities. Features of pointing performance we analyzed include metrics such as movement trajectories, clicking, and double clicking. These individuals exhibited high variance during both supervised and unsupervised (or real world) computer use across multiple login sessions. The high variance found within each participant highlights the potential inaccuracy of judging performance based on a single laboratory session.
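
One representative trajectory metric, path efficiency, can be computed from a logged cursor path as in the simplified sketch below; this is a generic simplification, and the paper's actual feature definitions may differ:

# Illustrative sketch of one trajectory metric: path efficiency, the
# straight-line distance from movement start to click divided by the actual
# distance traveled. Values well below 1.0 indicate indirect, effortful
# pointing movements.

import math

def path_efficiency(trajectory: list[tuple[float, float]]) -> float:
    """trajectory: logged cursor positions from movement start to the click."""
    if len(trajectory) < 2:
        return 1.0
    traveled = sum(math.dist(a, b) for a, b in zip(trajectory, trajectory[1:]))
    straight = math.dist(trajectory[0], trajectory[-1])
    return straight / traveled if traveled > 0 else 1.0

# A direct movement scores near 1.0; a wandering one scores much lower.
print(path_efficiency([(0, 0), (50, 2), (100, 0)]))              # ~0.999
print(path_efficiency([(0, 0), (40, 60), (90, -30), (100, 0)]))  # ~0.48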


user interface software and technology | 2007

Dirty desktops: using a patina of magnetic mouse dust to make common interactor targets easier to select

Amy Hurst; Jennifer Mankoff; Anind K. Dey; Scott E. Hudson

A common task in graphical user interfaces is controlling onscreen elements using a pointer. Current adaptive pointing techniques require applications to be built using accessibility libraries that reveal information about interactive targets, and most do not handle path/menu navigation. We present a pseudo-haptic technique that is OS and application independent, and can handle both dragging and clicking. We do this by associating a small force with each past click or drag. When a user frequently clicks in the same general area (e.g., on a button), the patina of past clicks naturally creates a pseudo-haptic magnetic field with an effect similar to that of snapping or sticky icons. Our contribution is a bottom-up approach to make targets easier to select without requiring prior knowledge of them.
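
A minimal sketch of the patina idea, using made-up radius and gain parameters rather than values or code from the paper, could look like this:

# Illustrative sketch: keep a history of past click locations and nudge the
# cursor toward nearby clusters, so frequently selected spots behave like
# sticky/magnetic targets. The radius and gain are assumed parameters.

import math

click_history: list[tuple[float, float, float]] = []  # (x, y, weight)

def record_click(x: float, y: float, weight: float = 1.0) -> None:
    click_history.append((x, y, weight))

def pseudo_haptic_offset(cx: float, cy: float,
                         radius: float = 30.0, gain: float = 0.15) -> tuple[float, float]:
    """Return a small (dx, dy) pulling the cursor toward nearby past clicks."""
    dx = dy = 0.0
    for px, py, w in click_history:
        dist = math.hypot(px - cx, py - cy)
        if 0 < dist < radius:
            pull = gain * w * (1.0 - dist / radius)  # stronger pull closer in
            dx += pull * (px - cx) / dist
            dy += pull * (py - cy) / dist
    return dx, dy

# Example: after many clicks on a button near (100, 100), a cursor passing at
# (110, 104) is gently drawn toward the accumulated patina.
for _ in range(20):
    record_click(100.0, 100.0)
print(pseudo_haptic_offset(110.0, 104.0))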

Collaboration


Dive into Amy Hurst's collaborations.

Top Co-Authors

Paul M. Aoki (University of California)
Jennifer Mankoff (Carnegie Mellon University)
Scott E. Hudson (Carnegie Mellon University)
Jodi Forlizzi (Carnegie Mellon University)
John Zimmerman (Carnegie Mellon University)
Rebecca E. Grinter (Georgia Institute of Technology)
Bilge Mutlu (University of Wisconsin-Madison)