Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Robert St. Amant is active.

Publications


Featured research published by Robert St. Amant.


ACM Transactions on Computer-Human Interaction | 2007

Model-based evaluation of expert cell phone menu interaction

Robert St. Amant; Thomas E. Horton; Frank E. Ritter

We describe concepts to support the analysis of cell phone menu hierarchies, based on cognitive models of users and easy-to-use optimization techniques. We present an empirical study of user performance on five simple tasks of menu traversal on an example cell phone. Two of the models applied to these tasks, based on GOMS and ACT-R, give good predictions of behavior. We use the empirically supported models to create an effective evaluation and improvement process for menu hierarchies. Our work makes three main contributions: a novel and timely study of a new, very common HCI task; new versions of existing models for accurately predicting performance; and a search procedure to generate menu hierarchies that reduce traversal time, in simulation studies, by about a third.
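The traversal-time predictions above can be illustrated with a minimal keystroke-level sketch; the timing constants, menu layout, and function names here are illustrative assumptions, not the paper's actual models or parameters.

```python
# Hypothetical sketch of a keystroke-level estimate of cell phone menu
# traversal time, in the spirit of the GOMS-style models described above.
# KEY_PRESS and MENU_SCAN are assumed constants, not the paper's values.

KEY_PRESS = 0.28   # assumed seconds per key press
MENU_SCAN = 0.10   # assumed seconds to read each item scrolled past

def traversal_time(path):
    """Predicted time to select an item.

    `path` lists the 1-based position of the target at each menu level;
    reaching position k costs (k - 1) scroll presses plus one select press.
    """
    total = 0.0
    for pos in path:
        total += (pos - 1) * (KEY_PRESS + MENU_SCAN)  # scroll down to the item
        total += KEY_PRESS                            # press select
    return total

# Promoting a frequently used item toward the top shortens the predicted path,
# which is the kind of improvement the paper's search procedure automates.
slow = traversal_time([5, 3])   # item deep in both menu levels
fast = traversal_time([1, 1])   # same item promoted to the top of each level
assert fast < slow
```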


Human Factors in Computing Systems | 2004

Model-based evaluation of cell phone menu interaction

Robert St. Amant; Thomas E. Horton; Frank E. Ritter

Cell phone interfaces are now ubiquitous. In this paper, we describe concepts to support the analysis of cell phone menu hierarchies. We present an empirical study of user performance on five simple tasks of menu traversal on a cell phone. Two models we tested, based on GOMS and ACT-R, give very good predictions of behavior. We use the study results to motivate an effective evaluation process for menu hierarchies. Our work makes several contributions: a novel and timely study of a new, very common HCI task; new models for accurately predicting performance; novel development tools to support such modeling; and a search procedure to generate menu hierarchies that reduce traversal time, in simulation studies, by about a third.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2001

A perception/action substrate for cognitive modeling in HCI

Robert St. Amant; Mark O. Riedl

This article describes a general-purpose programmable substrate designed to allow cognitive modeling systems to interact with off-the-shelf interactive applications. The substrate, called VisMap, improves on conventional approaches, in which a cognitive model interacts with a hand-constructed abstraction, an artificial simulation or an interface tailored specifically to a modeling system. VisMap can be used to construct static scenarios for input to a cognitive model, without requiring its internal modification; alternatively, the system can be integrated with a cognitive model to support direct control of an application.


Human Factors in Computing Systems | 1999

A visual medium for programmatic control of interactive applications

Luke Zettlemoyer; Robert St. Amant

The VisMap system provides for visual manipulation of arbitrary off-the-shelf applications, through an application's graphical user interface. VisMap's API-independent control has advantages for tasks that can benefit from direct access to the functions of the user interface. We describe the design goals and architecture of the system, and we discuss two applications, a user-controlled visual scripting program and an autonomous solitaire-playing program, which together demonstrate some of the capabilities and limitations of the approach.
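The screen-reading idea behind VisMap can be sketched in miniature: locate a GUI element by matching its pixels in a screenshot rather than calling an application API. The tiny 0/1 "images" and the helper name below are invented for illustration.

```python
# Minimal sketch of pixel-level GUI sensing in the spirit of VisMap:
# find a known visual pattern (e.g., a button) inside a screen image.
# `screen` and `pattern` are toy 0/1 grids standing in for real pixels.

def find_pattern(screen, pattern):
    """Return (row, col) of the first exact match of `pattern` in `screen`,
    or None if the pattern does not occur."""
    ph, pw = len(pattern), len(pattern[0])
    for r in range(len(screen) - ph + 1):
        for c in range(len(screen[0]) - pw + 1):
            if all(screen[r + i][c + j] == pattern[i][j]
                   for i in range(ph) for j in range(pw)):
                return (r, c)
    return None

screen = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
button = [[1, 1], [1, 1]]
assert find_pattern(screen, button) == (1, 1)
```

A real system would of course use robust image processing rather than exact matching, but the control flow (sense pixels, locate elements, then act on them) is the same.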


Human-Computer Interaction | 1999

User interface affordances in a planning representation

Robert St. Amant

This article shows how the concept of affordance in the user interface fits into a well-understood artificial intelligence (AI) model of acting in an environment. In this model, AI planning research is used to interpret affordances in terms of the costs associated with the generation and execution of operators in a plan. This article motivates the approach with a brief survey of the affordance literature and its connections to the planning literature, and then explores its implications through examples of common user interface mechanisms described in affordance terms. Despite its simplicity, the modeling approach ties together several different threads of practical and theoretical work on affordance into a single conceptual framework.


Communications of the ACM | 2000

Programming by example: visual generalization in programming by example

Robert St. Amant; Henry Lieberman; Richard E. Potter; Luke Zettlemoyer

When a user selects a graphical object on the screen, for example, most PBE systems describe the object in terms of the properties of the underlying application data. If the user selects a link on a Web page, the PBE system might represent that selection based on the link's HTML properties. Here, we explore a different, and radical, approach: using the visual properties of the interaction elements themselves, including size, shape, color, and appearance, to describe user intentions. Only recently has the speed of image processing made feasible PBE systems' real-time analysis of screen images. We have not yet realized the goal of a PBE system that uses "visual generalization" but feel this approach is important enough to warrant describing and promoting the idea publicly. (Visual generalization means the inference of general patterns in user behavior based on the visual properties and relationships of user interface objects.) Visual information can supplement the information available from other sources, suggesting new kinds of generalizations not possible from application data alone. In addition, these generalizations can map more closely to user intentions, especially those of beginning users, who rely on the same visual information when making selections. Moreover, visual generalization can sometimes remove one of the main stumbling blocks: reliance on application program interfaces (APIs).


Journal of Computational and Graphical Statistics | 1998

Intelligent Support for Exploratory Data Analysis

Robert St. Amant; Paul R. Cohen

Exploratory data analysis (EDA) is as much a matter of strategy as of selecting specific statistical operations. We have developed a knowledge-based planning system, called AIDE, to help users with EDA. AIDE strikes a balance between conventional statistical packages, which need guidance for every step in the exploration, and autonomous systems, which leave the user entirely out of the decision-making process. AIDE's processing is based on artificial intelligence planning techniques, which give us a useful means of representing some types of statistical strategy. In this article we describe the design of AIDE and its behavior in exploring a small, complex data set.
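The plan-based strategy representation described above can be sketched as precondition/operation rules that propose exploratory steps for a dataset. The rule set below is an invented toy example, not AIDE's actual knowledge base.

```python
# Illustrative sketch of plan-style EDA strategy in the spirit of AIDE:
# each rule pairs a precondition on the data's description with a
# candidate exploratory operation. All rules and names are assumptions.

RULES = [
    (lambda d: d["type"] == "numeric" and d.get("skewed"), "log-transform"),
    (lambda d: d["type"] == "numeric", "histogram"),
    (lambda d: d["type"] == "categorical", "frequency-table"),
]

def suggest(description):
    """Return the exploratory operations whose preconditions the data meets."""
    return [op for precondition, op in RULES if precondition(description)]

assert suggest({"type": "numeric", "skewed": True}) == ["log-transform", "histogram"]
assert suggest({"type": "categorical"}) == ["frequency-table"]
```

A full planner would also order and interleave these operations and incorporate the user's choices, which is where the balance between guidance and autonomy comes in.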


Your wish is my command | 2001

Visual generalization in programming by example

Robert St. Amant; Henry Lieberman; Luke Zettlemoyer; Richard E. Potter

In programming-by-example (PBE) systems, the system records the actions performed by a user in the interface and produces a generalized program that can be used later in analogous examples. A key issue is how to describe the actions and objects selected by the user, which determines what kind of generalizations will be possible. This chapter explores an approach using visual properties of the interaction elements themselves, such as the size, shape, color, and appearance of graphical objects, to describe user intentions. Visual information can supplement information available from other sources and opens up the possibility of new kinds of generalizations not possible from the application data alone. In addition, these generalizations can map more closely to the intentions of users, especially beginning users, who rely on the same visual information when making selections. Finally, visual generalization can sometimes remove one of the worst obstacles preventing the use of PBE with commercial applications, that is, reliance on application program interfaces (APIs). When necessary, PBE systems can work exclusively from the visual appearance of applications and do not need explicit cooperation from the API.
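The generalization step described above can be sketched simply: take the visual properties shared by the user's example selections as the description, then apply it to pick out analogous elements. The property names and dictionaries below are illustrative assumptions.

```python
# Hedged sketch of visual generalization for PBE: infer a description
# from example selections by intersecting their visual properties,
# then match other on-screen elements against that description.

def generalize(examples):
    """Return the visual properties shared by all example elements."""
    shared = dict(examples[0])
    for element in examples[1:]:
        shared = {k: v for k, v in shared.items() if element.get(k) == v}
    return shared

def matches(element, description):
    """True if the element has every property in the description."""
    return all(element.get(k) == v for k, v in description.items())

elements = [
    {"shape": "rect", "color": "blue", "size": 12},
    {"shape": "rect", "color": "blue", "size": 20},
    {"shape": "oval", "color": "red",  "size": 12},
]
# Suppose the user selected the two rectangles as examples.
desc = generalize(elements[:2])
assert desc == {"shape": "rect", "color": "blue"}
assert [matches(e, desc) for e in elements] == [True, True, False]
```

Note that nothing here consults application data or an API; the description is built entirely from what the elements look like.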


Intelligent User Interfaces | 1998

IBOTS: agent control through the user interface

Luke Zettlemoyer; Robert St. Amant; Martin S. Dulberg

This paper describes an ibot, a specialized software agent that exists in the environment of the user interface. Such an agent interacts with applications through the same medium as a human user. Its sensors process screen contents and mouse/keyboard events to monitor the user's actions and the responses of the environment, while its effectors can generate such events for its own contributions to the interaction. We describe the architecture of our agent and its algorithms for image processing, event management, and state representation. We illustrate the use of the agent with a small feasibility study in the area of software logging; results are promising for future progress.
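The sensor/effector split described above can be sketched as a minimal sense/act loop; the class and method names are assumptions for illustration, not the paper's architecture.

```python
# Rough sketch of the ibot idea: an agent that observes input events in
# the interface (sensing) and can emit events of its own (acting).
# Here the "action" is simply logging and replaying the last event seen.

class IBot:
    def __init__(self):
        self.log = []                       # observed user actions

    def sense(self, event):
        """Record a mouse/keyboard event observed in the interface."""
        self.log.append(event)

    def act(self):
        """Generate an event of the agent's own; here, replay the last one."""
        return self.log[-1] if self.log else None

bot = IBot()
bot.sense(("click", 40, 60))                # a toy (event, x, y) tuple
assert bot.act() == ("click", 40, 60)
```

The software-logging feasibility study in the paper corresponds roughly to the `log` side of this loop: the agent watches the interaction without needing hooks into the application.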


Smart Graphics | 2002

Characterizing tool use in an interactive drawing environment

Robert St. Amant; Thomas E. Horton

The metaphor of tool use for describing the interaction between a human and a computer is pervasive in user interface design. The basic concept of tool use, however, is difficult to define precisely, for HCI purposes or in general. In this paper we argue that a close examination of physical tool use can improve the design of interactive software. We describe a drawing application, HabilisDraw, that incorporates some of the properties we associate with physical tools but are not commonly found in software: persistent tool objects that encapsulate behavior and information, that can be used in conjunction with one another, and that embody rich cues about their appropriate usage. Initial results from formative evaluation suggest that the approach has some promise.

Collaboration


Dive into Robert St. Amant's collaborations.

Top Co-Authors

Thomas E. Horton (North Carolina State University)
Christopher G. Healey (North Carolina State University)
Arpan Chakraborty (North Carolina State University)
Frank E. Ritter (Pennsylvania State University)
Martin S. Dulberg (North Carolina State University)
Mark O. Riedl (Georgia Institute of Technology)
R. Michael Young (North Carolina State University)
David L. Roberts (North Carolina State University)