Publication


Featured research published by Tim Halverson.


Behavior Research Methods, Instruments, & Computers | 2002

Cleaning up systematic error in eye-tracking data by using required fixation locations

Anthony J. Hornof; Tim Halverson

In the course of running an eye-tracking experiment, one computer system or subsystem typically presents the stimuli to the participant and records manual responses, and another collects the eye movement data, with little interaction between the two during the course of the experiment. This article demonstrates how the two systems can interact with each other to facilitate a richer set of experimental designs and applications and to produce more accurate eye-tracking data. In an eye-tracking study, a participant is periodically instructed to look at specific screen locations, or explicit required fixation locations (RFLs), in order to calibrate the eye tracker to the participant. The design of an experimental procedure will also often produce a number of implicit RFLs: screen locations that the participant must look at within a certain window of time or at a certain moment in order to successfully and correctly accomplish a task, but without explicit instructions to fixate those locations. In these windows of time or at these moments, the disparity between the fixations recorded by the eye tracker and the screen locations corresponding to implicit RFLs can be examined, and the results of the comparison can be used for a variety of purposes. This article shows how the disparity can be used to monitor the deterioration in the accuracy of the eye tracker calibration and to automatically invoke a re-calibration procedure when necessary. This article also demonstrates how the disparity will vary across screen regions and participants and how each participant's unique error signature can be used to reduce the systematic error in the eye movement data collected for that participant.
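The error-signature idea above lends itself to a simple post-hoc correction. The sketch below is a minimal illustration, assuming fixations arrive as (x, y) screen coordinates in pixels; the helper names, the per-participant mean offset, and the recalibration threshold are assumptions for illustration, not the authors' implementation.

    # Illustrative sketch, not the authors' code: estimating and removing a
    # participant's systematic eye-tracking error from disparities measured
    # at required fixation locations (RFLs).

    from statistics import mean

    def error_signature(rfl_samples):
        """Mean disparity (dx, dy) between recorded fixations and the RFLs
        the participant was known to be looking at.

        rfl_samples: list of ((rec_x, rec_y), (rfl_x, rfl_y)) pixel pairs.
        """
        dx = mean(rec[0] - rfl[0] for rec, rfl in rfl_samples)
        dy = mean(rec[1] - rfl[1] for rec, rfl in rfl_samples)
        return dx, dy

    def correct_fixation(fixation, signature):
        """Subtract the estimated systematic error from a recorded fixation."""
        (x, y), (dx, dy) = fixation, signature
        return x - dx, y - dy

    def needs_recalibration(rfl_samples, threshold_px=40.0):
        """Flag deteriorating calibration when the mean disparity grows too large."""
        dx, dy = error_signature(rfl_samples)
        return (dx * dx + dy * dy) ** 0.5 > threshold_px

    # Fixations recorded at moments when the task required looking at (400, 300)
    samples = [((412, 288), (400, 300)), ((407, 291), (400, 300))]
    signature = error_signature(samples)
    print(correct_fixation((512, 240), signature))  # fixation with the error signature removed
    print(needs_recalibration(samples))             # False: calibration still acceptable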


Human-Computer Interaction | 2013

A Computational Model of “Active Vision” for Visual Search in Human–Computer Interaction

Tim Halverson; Anthony J. Hornof

Human visual search plays an important role in many human–computer interaction (HCI) tasks. Better models of visual search are needed not just to predict overall performance outcomes, such as whether people will be able to find the information needed to complete an HCI task, but to understand the many human processes that interact in visual search, which will in turn inform the detailed design of better user interfaces. This article describes a detailed instantiation, in the form of a computational cognitive model, of a comprehensive theory of human visual processing known as “active vision” (Findlay & Gilchrist, 2003). The computational model is built using the Executive Process-Interactive Control cognitive architecture. Eye-tracking data from three experiments inform the development and validation of the model. The modeling asks—and at least partially answers—the four questions of active vision: (a) What can be perceived in a fixation? (b) When do the eyes move? (c) Where do the eyes move? (d) What information is integrated between eye movements? Answers include: (a) Items nearer the point of gaze are more likely to be perceived, and the visual features of objects are sometimes misidentified. (b) The eyes move after the fixated visual stimulus has been processed (i.e., has entered working memory). (c) The eyes tend to go to nearby objects. (d) Only the coarse spatial information of what has been fixated is likely maintained between fixations. The model developed to answer these questions has both scientific and practical value in that the model gives HCI researchers and practitioners a better understanding of how people visually interact with computers, and provides a theoretical foundation for predictive analysis tools that can predict aspects of that interaction.
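The answer to question (c), that the eyes tend to go to nearby objects, can be pictured with a toy saccade-targeting heuristic. The sketch below illustrates only that one idea, with assumed object positions and noise parameters; it is not the EPIC-based model described in the article.

    # Toy illustration of answer (c) above, not the EPIC-based model itself:
    # saccades tend to go to nearby objects that have not yet been examined.

    import math
    import random

    def next_saccade_target(gaze, objects, examined, noise_deg=1.0):
        """Pick the nearest unexamined object, with Gaussian noise added to the
        distances to mimic the variability of real saccade targeting.

        gaze: (x, y) in degrees of visual angle; objects: dict name -> (x, y).
        """
        candidates = {k: v for k, v in objects.items() if k not in examined}
        if not candidates:
            return None
        return min(candidates,
                   key=lambda k: math.dist(gaze, candidates[k]) + random.gauss(0, noise_deg))

    layout = {"A": (1, 1), "B": (2, 1), "C": (8, 6), "D": (9, 5)}  # assumed positions
    gaze, examined = (0.0, 0.0), set()
    while len(examined) < len(layout):        # scan until every object is examined
        target = next_saccade_target(gaze, layout, examined)
        examined.add(target)
        gaze = layout[target]
        print("fixate", target)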


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2004

Local Density Guides Visual Search: Sparse Groups are First and Faster

Tim Halverson; Anthony J. Hornof

Visual search is an important aspect of many tasks, but it is not well understood how layout design affects visual search. This research uses reaction time data, eye movement data, and computational cognitive modeling to investigate the effect of local density on the visual search of structured layouts of words. Layouts were all-sparse, all-dense, or mixed. Participants found targets in sparse groups faster, and searched sparse groups before dense groups. Participants made slightly more fixations per word in sparse groups, but these were much shorter fixations. The modeling suggests that participants may have attempted to process words within a consistent visual angle regardless of density, but that they were more likely to miss the target if the target was in a dense group. When combining densities in a layout, it may be beneficial to place important information in sparse groups.
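The "consistent visual angle" account suggests a simple back-of-the-envelope calculation: if each fixation processes the text falling within a fixed visual angle, a dense group packs more words into each fixation than a sparse one. The numbers below are assumptions for illustration, not parameters estimated in the study.

    # Back-of-the-envelope sketch of the "consistent visual angle" account:
    # if each fixation processes the text within a fixed visual angle, dense
    # groups pack more words into each fixation. All numbers are assumptions
    # for illustration, not parameters from the study.

    USEFUL_FIELD_DEG = 2.0                   # assumed width of text processed per fixation

    def words_per_fixation(words_per_degree):
        """Words falling inside the assumed useful field of view."""
        return USEFUL_FIELD_DEG * words_per_degree

    for label, density in [("sparse", 0.5), ("dense", 1.5)]:   # words per degree
        print(label, words_per_fixation(density), "words per fixation")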


Human Factors in Computing Systems | 2004

Link colors guide a search

Tim Halverson; Anthony J. Hornof

While much basic research exists on the effects of various visual properties on visual search, the application of such research to real-world tasks is lacking. The purpose of this research is to address the lack of empirical validation for design guidelines that affect visual search. One common design element used in Web interface design is link color. The general research question asked is how text color affects visual search. This research demonstrates, with reaction time and eye movement analysis, the dramatic but imperfect control a designer has over guiding the attention of users with text color. Experimental support for the differentiation of visited link colors is presented, along with analyses of the advantages provided by differentiating link colors.


New Interfaces for Musical Expression | 2007

EyeMusic: performing live music and multimedia compositions with eye movements

Anthony J. Hornof; Troy Rogers; Tim Halverson

In this project, eye tracking researchers and computer music composers collaborate to create musical compositions that are played with the eyes. A commercial eye tracker (LC Technologies Eyegaze) is connected to a music and multimedia authoring environment (Max/MSP/Jitter). The project addresses issues of both noise and control: How will the performance benefit from the noise inherent in eye trackers and eye movements, and to what extent should the composition encourage the performer to try to control a specific musical outcome? Providing one set of answers to these two questions, the authors create an eye-controlled composition, EyeMusic v1.0, which was selected by juries for live performance at computer music conferences.


Human Factors in Computing Systems | 2008

The effects of semantic grouping on visual search

Tim Halverson; Anthony J. Hornof

This paper reports on work-in-progress to better understand how users visually interact with hierarchically organized semantic information. Experimental reaction time and eye movement data are reported that give insight into strategies people employ while searching visual layouts containing words that are either grouped by category (i.e., semantically cohesive) or randomly grouped. Additionally, sometimes the category labels of the cohesive groups are displayed as part of the group. Preliminary results suggest that: (a) When groups are cohesive, people tend to search labeled and unlabeled layouts similarly. (b) People seem to trust the categorical information of labels more than non-labels. This work will be used to extend current computational models of visual search to better predict users' visual interaction with interfaces.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

Classifying Workload with Eye Movements in a Complex Task

Tim Halverson; Justin R. Estepp; James C. Christensen; Jason W. Monnin

Eye movements and pupil size have been used to assess workload in previous research. However, the results presented in the literature vary, and the tasks have been too simple at times or the experimental conditions (e.g., lighting) too tightly controlled to determine if the use of eye data to assess workload is useful in real-world contexts. This research investigates the use of ten eye movement-, eyelid-, or pupil-related metrics as input to support vector machines for classifying workload in a complex task. The results indicate that both pupil size and percentage of eye closure are useful for predicting workload. Further, the combination of the two metrics increases the robustness and accuracy of the workload predictions.
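A minimal sketch of this kind of classifier, using scikit-learn's support vector machine on two of the metrics the abstract highlights (pupil size and percentage of eye closure); the feature values are fabricated placeholders and the pipeline is not the study's actual implementation.

    # Illustrative sketch of workload classification from eye metrics with a
    # support vector machine (scikit-learn). The feature values are fabricated
    # placeholders, not data or parameters from the study.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Each row: [mean pupil diameter (mm), percentage of eye closure (PERCLOS)]
    X = np.array([
        [3.1, 0.05], [3.0, 0.07], [3.2, 0.04],   # low-workload examples
        [3.8, 0.15], [3.9, 0.18], [3.7, 0.20],   # high-workload examples
    ])
    y = np.array([0, 0, 0, 1, 1, 1])             # 0 = low, 1 = high workload

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, y)
    print(clf.predict([[3.85, 0.17]]))           # expected: [1] (high workload)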


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2008

Transforming Object Locations on a 2D Visual Display into Cued Locations in 3D Auditory Space

Anthony J. Hornof; Tim Halverson; Andy Isaacson; Erik Brown

An empirical study explored the extent to which people can map locations in auditory space to locations on a visual display for four different transformations (or mappings) between auditory and visual surfaces. Participants were trained in each of four transformations: horizontal square, horizontal arc, vertical square, and vertical spherical surface. On each experimental trial, a sound was played through headphones connected to a spatialized sound system that uses a non-individualized head-related transfer function. The participant's task was to determine, using one transformation at a time, which of two objects on a visual display corresponded to the location of the sound. Though the two vertical transformations provided a more direct stimulus-response compatibility with the visual display, the two horizontal transformations made better use of the human auditory system's ability to localize sound, and resulted in better performance. Eye movements were analyzed, and it was found that the horizontal arc transformation provided the best auditory cue for moving the eyes to the correct visual target location with a single saccade.
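One way to picture a transformation such as the horizontal arc is to map an object's horizontal screen position to an azimuth on an arc around the listener at ear height. The geometry below is an assumed illustration, not the exact mapping or spatialized-sound system used in the experiment.

    # Illustrative sketch with assumed geometry, not the experiment's exact
    # mapping: transform an object's horizontal position on a 2D display into
    # a cued location in 3D auditory space via a "horizontal arc" transformation.

    import math

    SCREEN_W = 1280            # assumed display width in pixels
    ARC_SPAN_DEG = 120.0       # assumed arc spanned in front of the listener
    RADIUS_M = 1.0             # assumed arc radius in meters

    def horizontal_arc(x_px):
        """Map horizontal screen position to a point on an arc at ear height,
        returned as (x right, y front, z up) for a spatialized-sound renderer."""
        azimuth_deg = (x_px / SCREEN_W - 0.5) * ARC_SPAN_DEG   # 0 = straight ahead
        az = math.radians(azimuth_deg)
        return (RADIUS_M * math.sin(az), RADIUS_M * math.cos(az), 0.0)

    print(horizontal_arc(0))      # leftmost object: sound 60 degrees to the left
    print(horizontal_arc(1280))   # rightmost object: sound 60 degrees to the right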


New Interfaces for Musical Expression | 2007

EyeMusic v1.0

Anthony J. Hornof; Troy Rogers; Tim Halverson

EyeMusic v1.0 explores how eye movements can be sonified to show where a person is looking using sound, and how this sonification can be used in real time to create music. EyeMusic provides a unique physical interface to an electronic music composition. An eye tracking device (the LC Technologies Eyegaze Communication System) reports where the performer is looking on the computer screen, as well as other parameters pertaining to the status of the eyes. The eye tracker reports these data in real time to a computer program (written using Max/MSP/Jitter). The computer program generates and modifies sounds and images based on these data. While the eye is, in ordinary human usage, an organ of perception, EyeMusic v1.0 allows it to be a manipulator as well. EyeMusic creates an unusual feedback loop. The performer may be motivated to look at a physical location either to process it visually (the usual motivation for an eye movement) or to create a sound (a new motivation). These two motivations can work together to achieve perceptual-motor harmony and also to create music along the way. The two motivations can also generate some conflict, though, as when the performer moves the gaze close to an object to set up a specific sonic or visual effect but resists the inclination to look directly at the object. Through it all, EyeMusic explores how the eyes can be used to directly perform a musical composition.
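A real-time link of this kind, from an eye tracker into Max/MSP/Jitter, is often built by streaming gaze samples as OSC messages over UDP. The sketch below uses the python-osc package with fabricated gaze values and an assumed port; it is not the authors' actual Eyegaze-to-Max pipeline.

    # Illustrative sketch, not the authors' pipeline: stream gaze samples to a
    # Max/MSP patch as OSC messages over UDP using the python-osc package
    # (pip install python-osc). Host, port, and gaze values are assumptions.

    import random
    import time

    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)     # assumed address of the Max patch

    def read_gaze():
        """Placeholder for the eye tracker API: returns (x, y, pupil_diameter)."""
        return random.uniform(0, 1280), random.uniform(0, 1024), random.uniform(2.5, 4.0)

    for _ in range(100):                 # ~2 seconds at a nominal 50 Hz sample rate
        x, y, pupil = read_gaze()
        client.send_message("/gaze", [x, y, pupil])
        time.sleep(0.02)

On the Max side, a [udpreceive 9000] object, optionally followed by [route /gaze], could then map the incoming values to synthesis parameters.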


Human Factors in Computing Systems | 2003

Cognitive strategies and eye movements for searching hierarchical computer displays

Anthony J. Hornof; Tim Halverson

Collaboration


Dive into Tim Halverson's collaborations.

Top Co-Authors

Troy Rogers, University of Virginia
Brian McClimens, United States Naval Research Laboratory
James C. Christensen, Air Force Research Laboratory
Justin R. Estepp, Air Force Research Laboratory