

Publications


Featured research published by Anthony J. Hornof.


Behavior Research Methods, Instruments, & Computers | 2002

Cleaning up systematic error in eye-tracking data by using required fixation locations

Anthony J. Hornof; Tim Halverson

In the course of running an eye-tracking experiment, one computer system or subsystem typically presents the stimuli to the participant and records manual responses, and another collects the eye movement data, with little interaction between the two during the course of the experiment. This article demonstrates how the two systems can interact with each other to facilitate a richer set of experimental designs and applications and to produce more accurate eye tracking data. In an eye-tracking study, a participant is periodically instructed to look at specific screen locations, or explicit required fixation locations (RFLs), in order to calibrate the eye tracker to the participant. The design of an experimental procedure will also often produce a number of implicit RFLs—screen locations that the participant must look at within a certain window of time or at a certain moment in order to successfully and correctly accomplish a task, but without explicit instructions to fixate those locations. In these windows of time or at these moments, the disparity between the fixations recorded by the eye tracker and the screen locations corresponding to implicit RFLs can be examined, and the results of the comparison can be used for a variety of purposes. This article shows how the disparity can be used to monitor the deterioration in the accuracy of the eye tracker calibration and to automatically invoke a re-calibration procedure when necessary. This article also demonstrates how the disparity will vary across screen regions and participants and how each participant’s unique error signature can be used to reduce the systematic error in the eye movement data collected for that participant.
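
A minimal sketch of the error-signature idea, assuming a hypothetical data model (the function names, tuple formats, and region scheme are illustrative, not the article's implementation): per-region mean disparities between recorded fixations and implicit RFLs are computed at known-fixation moments, then subtracted from later gaze data in the same region.

```python
# Sketch: reduce systematic eye-tracking error using implicit RFLs.
# Hypothetical data model; not the article's actual implementation.
from collections import defaultdict
from statistics import mean

def error_signature(fixations, rfls, region_of):
    """Mean (dx, dy) disparity per screen region.

    fixations: (x, y) points recorded at moments when the task guarantees
               the participant was looking at an RFL.
    rfls:      the matching (x, y) required fixation locations.
    region_of: maps an (x, y) point to a region id (e.g., a grid cell).
    """
    disparities = defaultdict(list)
    for (fx, fy), (rx, ry) in zip(fixations, rfls):
        disparities[region_of((rx, ry))].append((fx - rx, fy - ry))
    return {region: (mean(dx for dx, _ in ds), mean(dy for _, dy in ds))
            for region, ds in disparities.items()}

def correct(fixation, signature, region_of):
    """Subtract the region's systematic error from one recorded fixation."""
    dx, dy = signature.get(region_of(fixation), (0.0, 0.0))
    return (fixation[0] - dx, fixation[1] - dy)
```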


User Interface Software and Technology | 1995

GLEAN: a computer-based tool for rapid GOMS model usability evaluation of user interface designs

David E. Kieras; Scott D. Wood; Kasem Abotel; Anthony J. Hornof

Engineering models of human performance permit some aspects of usability of interface designs to be predicted from an analysis of the task, and thus can replace to some extent expensive user testing data. The best developed such tools are GOMS models, which have been shown to be accurate and effective in predicting usability of the procedural aspects of interface designs. This paper describes a computer-based tool, GLEAN, that generates quantitative predictions from a supplied GOMS model and a set of benchmark tasks. GLEAN is demonstrated to reproduce the results of a case study of GOMS model application with considerable time savings over both manual modeling as well as empirical testing.
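
GLEAN executes full GOMS models, but the underlying idea of predicting task time from an operator-level analysis can be shown with a much smaller member of the GOMS family, the Keystroke-Level Model. A minimal sketch, using the commonly cited KLM operator times rather than anything from GLEAN itself:

```python
# Keystroke-Level Model (KLM) sketch: a simplified GOMS-family estimate,
# not GLEAN itself. Operator times are the commonly cited KLM values
# (Card, Moran & Newell, 1983), in seconds.
KLM_OPERATORS = {
    "K": 0.28,  # press a key or button (average skilled typist)
    "P": 1.10,  # point with a mouse at a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(method: str) -> float:
    """Sum the operator times for one benchmark task."""
    return sum(KLM_OPERATORS[op] for op in method)

# Benchmark task: think, reach for the mouse, point at a menu item, click.
print(round(predict_time("MHPK"), 2))  # -> 3.13 seconds
```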


Human Factors in Computing Systems | 1997

Cognitive modeling reveals menu search is both random and systematic

Anthony J. Hornof; David E. Kieras

To understand how people search for a known target item in an unordered pull-down menu, this research presents cognitive models that vary serial versus parallel processing of menu items, random versus systematic search, and different numbers of menu items fitting into the fovea simultaneously. Varying these conditions, models were constructed and run using the EPIC cognitive architecture. The selection times predicted by the models are compared with selection times of human subjects performing the same menu task. Comparing the predicted and observed times, the models reveal that (1) people process more than one menu item at a time, and (2) people search menus using both random and systematic search strategies.
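
The random-versus-systematic contrast can be illustrated with a small Monte Carlo sketch; the per-fixation time and fovea size below are illustrative assumptions, not the EPIC models' parameters. The point is the qualitative signature: systematic search predicts selection time that grows with target position, while random search predicts a flat average.

```python
# Monte Carlo sketch of random vs. systematic menu search. The timing
# and fovea-size values are illustrative assumptions, not the paper's
# EPIC parameters.
import random

FIXATION_MS = 100   # assumed time to process one fixation
FOVEA_ITEMS = 3     # menu items assumed to fit in the fovea at once

def systematic_search(target):
    """Top-to-bottom search, FOVEA_ITEMS per fixation (0-based target)."""
    return (target // FOVEA_ITEMS + 1) * FIXATION_MS

def random_search(n_items, target, trials=10_000):
    """Random fixations (with replacement) until the target's group is seen."""
    n_groups = -(-n_items // FOVEA_ITEMS)          # ceiling division
    target_group = target // FOVEA_ITEMS
    total = 0
    for _ in range(trials):
        fixations = 1
        while random.randrange(n_groups) != target_group:
            fixations += 1
        total += fixations
    return total * FIXATION_MS / trials

# A 9-item menu: systematic times grow with position; random stays flat.
for pos in (0, 4, 8):
    print(pos, systematic_search(pos), round(random_search(9, pos)))
```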


Human Factors in Computing Systems | 2005

EyeDraw: enabling children with severe motor impairments to draw with their eyes

Anthony J. Hornof; Anna C. Cavender

EyeDraw is a software program that, when run on a computer with an eye tracking device, enables children with severe motor disabilities to draw pictures by just moving their eyes. This paper discusses the motivation for building the software, how the program works, the iterative development of two versions of the software, user testing of the two versions by people with and without disabilities, and modifications to the software based on user testing. Feedback from both children and adults with disabilities, and from their caregivers, was especially helpful in the design process. The project identifies challenges that are unique to controlling a computer with the eyes, and unique to writing software for children with severe motor impairments.
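
The crux of eye-controlled drawing is distinguishing looking from commanding. A minimal dwell-detection sketch follows; the 500 ms threshold and 30-pixel tolerance are assumptions for illustration, not EyeDraw's actual parameters.

```python
# Sketch of dwell detection, the mechanism eye-drawing interfaces use to
# tell looking apart from issuing a command. Thresholds are assumptions.
import math

DWELL_MS = 500    # assumed: how long gaze must hold still
RADIUS_PX = 30    # assumed: how far gaze may wander while dwelling

def detect_dwells(samples):
    """samples: time-ordered (timestamp_ms, x, y) gaze points.
    Yields (x, y) wherever gaze stays within RADIUS_PX for DWELL_MS."""
    start = 0
    for i, (t, x, y) in enumerate(samples):
        anchor_t, ax, ay = samples[start]
        if math.hypot(x - ax, y - ay) > RADIUS_PX:
            start = i                 # gaze moved on; restart the clock
        elif t - anchor_t >= DWELL_MS:
            yield (ax, ay)            # dwell completed: issue a command
            start = i + 1             # require a fresh dwell next time

# 60 Hz samples holding near (100, 100) for ~0.65 s -> one dwell.
samples = [(int(k * 16.7), 100 + (k % 3), 100) for k in range(40)]
print(list(detect_dwells(samples)))
```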


Human-Computer Interaction | 2004

Cognitive strategies for the visual search of hierarchical computer displays

Anthony J. Hornof

This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined: unlabeled layouts that contain multiple groups of items but no group headings, labeled layouts in which items are grouped and each group has a useful heading, and a target-only layout that contains just one item. A number of plausible strategies were proposed for each layout. Each strategy was programmed into the EPIC cognitive architecture, producing models that simulate the human visual-perceptual, oculomotor, and cognitive processing required for the task. The models generate search time predictions. For unlabeled layouts, the mean layout search times are predicted by a purely random search strategy, and the more detailed positional search times are predicted by a noisy systematic strategy. The labeled layout search times are predicted by a hierarchical strategy in which first the group labels are systematically searched, and then the contents of the target group. The target-only layout search times are predicted by a strategy in which the eyes move directly to the sudden appearance of the target. The models demonstrate that human visual search performance can be explained largely in terms of the cognitive strategy used to coordinate the relevant perceptual and motor processes, that a clear and useful visual hierarchy triggers a fundamentally different visual search strategy and effectively gives the user greater control over the visual navigation, and that cognitive strategies will be an important component of a predictive visual search tool. The models provide insights pertaining to the visual-perceptual and oculomotor processes involved in visual search and contribute to the science base needed for predictive interface analysis.
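
The gap between flat and hierarchical search can be made concrete with a fixation-count sketch (one item per fixation, layout sizes chosen for illustration; these are not the EPIC models' parameters):

```python
# Fixation-count sketch of flat vs. label-first hierarchical search.
# One item per fixation is an illustrative simplification.
def flat_fixations(group_index, item_index, group_size):
    """Systematic search over every item, ignoring group labels."""
    return group_index * group_size + item_index + 1

def hierarchical_fixations(group_index, item_index):
    """Fixate labels until the target group, then its items."""
    return (group_index + 1) + (item_index + 1)

# Target: 3rd item of the 5th group in a 6-group x 5-item layout.
print(flat_fixations(4, 2, 5))        # 23 fixations without labels
print(hierarchical_fixations(4, 2))   # 8 fixations with useful labels
```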


Conference on Computers and Accessibility | 2004

EyeDraw: a system for drawing pictures with eye movements

Anthony J. Hornof; Anna C. Cavender; Rob Hoselton

This paper describes the design and development of EyeDraw, a software program that will enable children with severe mobility impairments to use an eye tracker to draw pictures with their eyes so that they can have the same creative developmental experiences as nondisabled children. EyeDraw incorporates computer-control and software application advances that address the special needs of people with motor impairments, with emphasis on the needs of children. The contributions of the project include (a) a new technique for using the eyes to control the computer when accomplishing a spatial task, (b) the crafting of task-relevant functionality to support this new technique in its application to drawing pictures, and (c) a user-tested implementation of the idea within a working computer program. User testing with nondisabled users suggests that we have designed and built an eye-cursor and eye drawing control system that can be used by almost anyone with normal control of their eyes. The core technique will be generally useful for a range of computer control tasks such as selecting a group of icons on the desktop by drawing a box around them.
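
Contribution (a), using the eyes to accomplish a spatial task, can be sketched as a small state machine: one dwell anchors a line's start point, the next dwell completes the line. The states and details below are assumptions for illustration, not EyeDraw's actual code.

```python
# Two-dwell drawing state machine in the spirit of EyeDraw; names and
# structure are illustrative assumptions.
from enum import Enum, auto

class Mode(Enum):
    LOOKING = auto()   # gaze moves freely; nothing is drawn
    DRAWING = auto()   # start anchored; a rubber-band line follows gaze

class DrawingCanvas:
    def __init__(self):
        self.mode = Mode.LOOKING
        self.start = None
        self.lines = []

    def on_dwell(self, point):
        """Called by a dwell detector each time a dwell completes."""
        if self.mode is Mode.LOOKING:
            self.start = point                       # anchor the line
            self.mode = Mode.DRAWING
        else:
            self.lines.append((self.start, point))   # commit the line
            self.mode = Mode.LOOKING

canvas = DrawingCanvas()
canvas.on_dwell((100, 100))   # first dwell: start a line
canvas.on_dwell((250, 180))   # second dwell: finish it
print(canvas.lines)           # [((100, 100), (250, 180))]
```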


Human Factors in Computing Systems | 2005

A comparison of LSA, WordNet and PMI-IR for predicting user click behavior

Ishwinder Kaur; Anthony J. Hornof

A predictive tool to simulate human visual search behavior would help interface designers inform and validate their designs. Such a tool would benefit from a semantic component that would help predict search behavior even in the absence of exact textual matches between goal and target. This paper discusses a comparison of three semantic systems (LSA, WordNet, and PMI-IR) to evaluate their performance in predicting the link that people would select given an information goal and a webpage. PMI-IR best predicted human performance as observed in a user study.
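
PMI-IR (Turney, 2001) scores the relatedness of two terms as pointwise mutual information estimated from search hit counts: score = log2(p(goal & link) / (p(goal) p(link))). A self-contained sketch follows, with a toy corpus standing in for the web-scale hit counts PMI-IR actually queries:

```python
# PMI-IR sketch (Turney, 2001). The toy corpus stands in for the search
# engine hit counts the real method uses.
import math

CORPUS = [
    "buy cheap flight tickets online",
    "flight schedules and airline tickets",
    "garden tools and seeds",
    "cheap garden seeds online",
]

def hits(*words):
    """Number of 'documents' containing all of the given words."""
    return sum(all(w in doc.split() for w in words) for doc in CORPUS)

def pmi_ir(goal_word, link_word):
    """log2( p(goal & link) / (p(goal) * p(link)) ); higher means the
    link text is more semantically related to the information goal."""
    n = len(CORPUS)
    joint = hits(goal_word, link_word) / n
    if joint == 0:
        return float("-inf")
    return math.log2(joint / ((hits(goal_word) / n) * (hits(link_word) / n)))

print(pmi_ir("flight", "tickets"))   # 1.0: the terms co-occur
print(pmi_ir("flight", "seeds"))     # -inf: never co-occur here
```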


Human Factors in Computing Systems | 1999

Cognitive modeling demonstrates how people use anticipated location knowledge of menu items

Anthony J. Hornof; David E. Kieras

This research presents cognitive models of a person selecting an item from a familiar, ordered, pull-down menu. Two different models provide a good fit with human data and thus two different possible explanations for the low-level cognitive processes involved in the task. Both models assert that people make an initial eye and hand movement to an anticipated target location without waiting for the menu to appear. The first model asserts that a person knows the exact location of the target item before the menu appears, but the model uses nonstandard Fitts' law coefficients to predict mouse pointing time. The second model asserts that a person would only know the approximate location of the target item, and the model uses Fitts' law coefficients better supported by the literature. This research demonstrates that people can develop considerable knowledge of locations in a visual task environment, and that more work regarding Fitts' law is needed.
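
The Fitts' law prediction at issue has a standard form, MT = a + b log2(D/W + 1), and the two models differ only in the coefficients a and b. A sketch with illustrative coefficient values (not the values either model uses):

```python
# Fitts' law (Shannon formulation): MT = a + b * log2(D / W + 1).
# The coefficients below are illustrative, not either model's values.
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted pointing time in seconds for a target of size `width`
    at `distance` (same units); `a` is the intercept, `b` is s/bit."""
    index_of_difficulty = math.log2(distance / width + 1)   # bits
    return a + b * index_of_difficulty

# A menu item 150 px away and 20 px tall:
print(round(fitts_time(150, 20), 3))   # -> 0.563 s
```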


Human-Computer Interaction | 2013

A Computational Model of “Active Vision” for Visual Search in Human–Computer Interaction

Tim Halverson; Anthony J. Hornof

Human visual search plays an important role in many human–computer interaction (HCI) tasks. Better models of visual search are needed not just to predict overall performance outcomes, such as whether people will be able to find the information needed to complete an HCI task, but to understand the many human processes that interact in visual search, which will in turn inform the detailed design of better user interfaces. This article describes a detailed instantiation, in the form of a computational cognitive model, of a comprehensive theory of human visual processing known as “active vision” (Findlay & Gilchrist, 2003). The computational model is built using the Executive Process-Interactive Control (EPIC) cognitive architecture. Eye-tracking data from three experiments inform the development and validation of the model. The modeling asks—and at least partially answers—the four questions of active vision: (a) What can be perceived in a fixation? (b) When do the eyes move? (c) Where do the eyes move? (d) What information is integrated between eye movements? Answers include: (a) Items nearer the point of gaze are more likely to be perceived, and the visual features of objects are sometimes misidentified. (b) The eyes move after the fixated visual stimulus has been processed (i.e., has entered working memory). (c) The eyes tend to go to nearby objects. (d) Only the coarse spatial information of what has been fixated is likely maintained between fixations. The model developed to answer these questions has both scientific and practical value in that the model gives HCI researchers and practitioners a better understanding of how people visually interact with computers, and provides a theoretical foundation for predictive analysis tools that can predict aspects of that interaction.
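
As a loose illustration of answer (c), that the eyes tend to go to nearby objects, the sketch below picks the next saccade target as the nearest not-yet-examined object. This illustrates the tendency only; it is not EPIC's saccade-selection mechanism.

```python
# Nearest-object saccade selection: an illustration of "the eyes tend to
# go to nearby objects," not EPIC's actual mechanism.
import math

def next_saccade(gaze, objects, examined):
    """gaze: (x, y); objects: dict id -> (x, y); examined: set of ids.
    Returns the nearest unexamined object's id, or None if done."""
    candidates = [(math.dist(gaze, pos), oid)
                  for oid, pos in objects.items() if oid not in examined]
    return min(candidates)[1] if candidates else None

objects = {"a": (10, 10), "b": (200, 40), "c": (30, 25)}
print(next_saccade((0, 0), objects, examined={"a"}))   # -> 'c'
```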


Human Factors in Computing Systems | 2009

Designing with children with severe motor impairments

Anthony J. Hornof

Children with severe motor impairments, such as those resulting from severe cerebral palsy, benefit greatly from assistive technology, but very little guidance is available on how to collaborate with this population as partners in the design of such technology. To explore how to facilitate such collaborations, a field-based participant observation study, along with structured and unstructured interviews, was conducted at a home for children with severe disabilities. Team-building collaborative design activities were pursued. Guidelines are proposed for how to collaborate with children with severe motor impairments.

Collaboration


Dive into Anthony J. Hornof's collaborations.

Top Co-Authors

Moira Burke, Carnegie Mellon University
Nicholas Gorman, University of Southern California
Troy Rogers, University of Virginia
Andrew Howes, University of Manchester