Publication


Featured research published by Joseph H. Goldberg.


Eye Tracking Research & Applications | 2000

Identifying fixations and saccades in eye-tracking protocols

Dario D. Salvucci; Joseph H. Goldberg

The process of fixation identification—separating and labeling fixations and saccades in eye-tracking protocols—is an essential part of eye-movement data analysis and can have a dramatic impact on higher-level analyses. However, algorithms for performing fixation identification are often described informally and rarely compared in a meaningful way. In this paper we propose a taxonomy of fixation identification algorithms that classifies algorithms in terms of how they utilize spatial and temporal information in eye-tracking protocols. Using this taxonomy, we describe five algorithms that are representative of different classes in the taxonomy and are based on commonly employed techniques. We then evaluate and compare these algorithms with respect to a number of qualitative characteristics. The results of these comparisons offer interesting implications for the use of the various algorithms in future work.
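
As a rough illustration of one class in this taxonomy, the sketch below implements a simple velocity-threshold (I-VT style) classifier in Python. The threshold value, sample format, and function name are assumptions for demonstration, not details taken from the paper.

```python
# Illustrative sketch of a velocity-threshold (I-VT style) fixation classifier.
# The 100 deg/s threshold and the (t, x, y) sample format are assumed values,
# not parameters from the paper.
import math

def classify_ivt(samples, velocity_threshold=100.0):
    """Label each inter-sample interval as 'fixation' or 'saccade'.

    samples: list of (t, x, y) tuples; t in seconds, x/y in degrees of visual angle.
    velocity_threshold: point-to-point velocity cutoff in degrees per second.
    """
    labels = []
    for prev, cur in zip(samples, samples[1:]):
        dt = cur[0] - prev[0]
        dist = math.hypot(cur[1] - prev[1], cur[2] - prev[2])
        velocity = dist / dt if dt > 0 else 0.0
        labels.append("saccade" if velocity > velocity_threshold else "fixation")
    return labels
```

Consecutive samples labeled as fixations would then be collapsed into fixation events (a centroid plus a duration), which is where the taxonomy's dispersion, duration, and area criteria come into play.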


Eye Tracking Research & Applications | 2002

Eye tracking in web search tasks: design implications

Joseph H. Goldberg; Mark J. Stimson; Marion Lewenstein; Neil G. Scott; Anna M. Wichansky

An eye tracking study was conducted to evaluate specific design features for a prototype web portal application. This software serves independent web content through separate, rectangular, user-modifiable portlets on a web page. Each of seven participants navigated across multiple web pages while conducting six specific tasks, such as removing a link from a portlet. Specific experimental questions included (1) whether eye tracking-derived parameters were related to page sequence or to user actions preceding page visits, (2) whether users were biased toward traveling vertically or horizontally while viewing a web page, and (3) whether specific sub-features of portlets were visited in any particular order. Participants required 2-15 screens and from 7 to more than 360 seconds to complete each task. Based on analysis of screen sequences, there was little evidence that search became more directed as the screen sequence increased. Navigation among portlets, when at least two columns existed, was biased toward horizontal search (across columns) rather than vertical search (within a column). Within a portlet, the header bar was not reliably visited prior to the portlet's body, evidence that header bars were not reliably used as navigation cues. Initial design recommendations emphasized placing critical portlets at the left and top of the web portal area and noted that related portlets need not appear in the same column. Further experimental replications are recommended to generalize these results to other applications.


Ergonomics | 1999

The effect of mental workload on the visual field size and shape

Esa Rantanen; Joseph H. Goldberg

Mental workload is known to reduce the area of one's visual field, but little is known about its effects on the shape of the visual field. Considering this, the visual fields of 13 subjects were measured concurrently under three levels of mental workload using a Goldmann visual perimeter. Tone-counting tasks were employed to induce mental workload while avoiding interference with visual performance. Various methods of shape measurement and analysis were used to investigate how the shape of the visual field varies as a function of mental load. As expected, the mean area of the visual field was reduced to 92.2% in the medium-workload condition and to 86.41% under heavy workload, relative to the light-load condition. This tunnelling effect was not uniform, but also produced statistically significant shape distortion, as measured by the majority of the 12 shape indices used here. These results have visual performance implications for many tasks that are susceptible to changes in visual fields and peripheral vision. Knowledge of the dynamics of the visual field as a function of mental workload can also offer significant advantages in the mathematical modelling of visual search.
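
The abstract refers to 12 shape indices; as a hedged illustration of what a generic shape index can look like, the sketch below computes a standard compactness measure, 4πA/P², for a polygonal visual-field boundary. This index equals 1.0 for a circle and is not claimed to be one of the indices used in the study.

```python
# Minimal sketch of a generic shape (compactness) index for a polygonal
# visual-field boundary: 4*pi*Area / Perimeter^2 (1.0 for a circle).
# A common index used for illustration, not necessarily one of the study's 12.
import math

def compactness(points):
    """points: list of (x, y) vertices of the visual-field boundary, in order."""
    area = 0.0
    perimeter = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1                  # shoelace term
        perimeter += math.hypot(x2 - x1, y2 - y1)  # boundary segment length
    area = abs(area) / 2.0
    return 4.0 * math.pi * area / (perimeter ** 2)
```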


Book chapter | 2003

Eye Tracking in Usability Evaluation: A Practitioner's Guide

Joseph H. Goldberg; Anna M. Wichansky

This chapter provides a practical guide for either the software usability engineer who is considering the benefits of eye tracking or the eye tracking specialist who is considering software usability evaluation as an application. Usability evaluation is defined rather loosely by industry as any of several applied techniques in which users interact with a product, system, or service and some behavioral data are collected. Usability goals are often stipulated as criteria, and an attempt is made to use test participants similar to the target-market users. The chapter discusses methodological issues first in usability evaluation and then in the eye-tracking realm; an integrated knowledge of both areas is beneficial for the experimenter who conducts eye tracking as part of a usability evaluation. Within each area, major issues are presented through a rhetorical questioning style. By framing eye tracking within the context of usability evaluation, its practical use is placed in a proper and realistic perspective.


Workshop on Beyond Time and Errors: Novel Evaluation Methods for Information Visualization (BELIV) | 2010

Comparing information graphics: a critical look at eye tracking

Joseph H. Goldberg; Jonathan I. Helfman

Effective graphics are essential for understanding complex information and completing tasks. To assess graphic effectiveness, eye tracking methods can help provide a deeper understanding of the scanning strategies that underlie more traditional, high-level accuracy and task completion time results. Eye tracking methods entail many challenges, such as defining fixations, assigning fixations to areas of interest, choosing appropriate metrics, addressing potential errors in gaze location, and handling scanning interruptions. Special considerations are also required when designing, preparing, and conducting eye tracking studies. An illustrative eye tracking study was conducted to assess differences in scanning within and between bar, line, and spider graphs, to determine which graphs best support relative comparisons along several dimensions. There was excessive scanning to locate the correct bar graph in easier tasks. Scanning across bar and line graph dimensions before comparing across graphs was evident in harder tasks. There was repeated scanning between the same dimension of two spider graphs, implying a greater cognitive demand from scanning in a circle that contains multiple linear dimensions than from scanning the linear axes of bar and line graphs. With appropriate task design and targeted analysis metrics, eye tracking techniques can illuminate visual scanning patterns hidden by more traditional time and accuracy results.
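
One of the analysis challenges listed above, assigning fixations to areas of interest, can be sketched as simple rectangle hit-testing. The AOI names and coordinates below are invented for illustration; the paper does not prescribe a particular implementation.

```python
# Hedged sketch of assigning fixations to rectangular areas of interest (AOIs).
# AOI names and pixel bounds are hypothetical.
def assign_fixations(fixations, aois):
    """fixations: list of (x, y) fixation centroids in pixels.
    aois: dict mapping AOI name -> (left, top, right, bottom) pixel bounds.
    Returns the AOI name for each fixation, or None when it falls outside all AOIs."""
    assigned = []
    for x, y in fixations:
        hit = None
        for name, (left, top, right, bottom) in aois.items():
            if left <= x <= right and top <= y <= bottom:
                hit = name
                break
        assigned.append(hit)
    return assigned

# Example with hypothetical AOIs for a bar-graph stimulus:
aois = {"title": (0, 0, 800, 60), "y_axis": (0, 60, 80, 600), "plot": (80, 60, 800, 600)}
print(assign_fixations([(400, 30), (300, 300)], aois))  # ['title', 'plot']
```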


Eye Tracking Research & Applications | 2010

Scanpath clustering and aggregation

Joseph H. Goldberg; Jonathan I. Helfman

Eye tracking specialists often need to understand and represent aggregate scanning strategies, but methods to identify similar scanpaths and aggregate multiple scanpaths have been elusive. A new method is proposed here to identify scanning strategies by aggregating groups of matching scanpaths automatically. A dataset of scanpaths is first converted to sequences of viewed area names, which are then represented in a dotplot. Matching sequences in the dotplot are found with linear regressions, and then used to cluster the scanpaths hierarchically. Aggregate scanning strategies are generated for each cluster and presented in an interactive dendrogram. While the clustering and aggregation method works in a bottom-up fashion, based on pair-wise matches, a top-down extension is also described, in which a scanning strategy is first input by cursor gesture, then matched against the dataset. The ability to discover both bottom-up and top-down strategy matches provides a powerful tool for scanpath analysis, and for understanding group scanning strategies.
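
A minimal sketch of the first step described above, converting scanpaths to sequences of viewed-area names, is shown below, together with a stand-in pairwise similarity. The paper itself matches sequences with dotplots and linear regressions; difflib's sequence ratio is used here only as an illustrative substitute, and a matrix of such pairwise scores could then feed a standard hierarchical clustering routine (for example, scipy.cluster.hierarchy.linkage).

```python
# Sketch: scanpath -> sequence of viewed-area (AOI) names, plus a stand-in
# pairwise similarity. The paper's dotplot/regression matching is NOT used here.
from difflib import SequenceMatcher

def to_aoi_sequence(fixations, assign):
    """Collapse consecutive fixations on the same AOI into one symbol.

    fixations: list of fixation records; assign: function mapping a fixation
    to an AOI name or None (both formats are assumptions)."""
    seq = []
    for fix in fixations:
        name = assign(fix)
        if name is not None and (not seq or seq[-1] != name):
            seq.append(name)
    return seq

def scanpath_similarity(seq_a, seq_b):
    """Similarity in [0, 1] between two AOI-name sequences (illustrative metric)."""
    return SequenceMatcher(None, seq_a, seq_b).ratio()
```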


Information Visualization | 2011

Eye tracking for visualization evaluation: reading values on linear versus radial graphs

Joseph H. Goldberg; Jonathan Helfman

An eye tracking methodology can help uncover subtle cognitive processing stages that are otherwise difficult to observe in visualization evaluation studies. Pros and cons of eye tracking methods are discussed here, including common analysis metrics. One example metric is the initial time at which all elements of a visualization that are required to complete a task have been viewed. An illustrative eye tracking study was conducted to compare how radial and linear graphs support value lookup tasks for both one and two data dimensions. Linear and radial versions of bar, line, area, and scatter graphs were presented to 32 participants, who each completed a counterbalanced series of tasks. Tasks were completed more quickly on linear graphs than on those with a radial layout. Scanpath analysis supported a three-stage processing model: (1) find the desired data dimension, (2) find its datapoint, and (3) map the datapoint to its value. Mapping a datapoint to its value was slower on radial than on linear graphs, probably because the eyes must follow a circular rather than a linear path. Finding a datapoint within a dimension was harder using line and area graphs than bar and scatter graphs, possibly due to visual confusion caused by the line representing a data series. Although few errors were made, eye tracking was also used here to classify error strategies. As a result of these analyses, guidelines are proposed for the design of radial and linear graphs.
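
The example metric mentioned above, the initial time at which all task-required elements of a visualization have been viewed, could be computed roughly as follows. The input format and AOI labels are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: earliest time by which every task-required element has received
# at least one fixation. The input format is an assumption.
def time_all_required_viewed(fixations, required):
    """fixations: list of (onset_time, aoi_name) pairs in chronological order.
    required: set of AOI names that must all be viewed to complete the task.
    Returns the onset time of the fixation that completes the set, or None."""
    remaining = set(required)
    for onset, aoi in fixations:
        remaining.discard(aoi)
        if not remaining:
            return onset
    return None
```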


Eye Tracking Research & Applications | 2010

Visual scanpath representation

Joseph H. Goldberg; Jonathan I. Helfman

Eye tracking scanpaths contain information about how people see, but traditional tangled, overlapping scanpath representations provide little insight into scanning strategies. The present work describes and extends several compact visual scanpath representations that can provide additional insight about individual and aggregate/multiple scanning strategies. Three categories of representations are introduced: (1) Scaled traces are small images of scanpaths as connected saccades, allowing comparison of relative fixation densities and distributions of saccades. (2) Time expansions, which substitute ordinal position for either the scanpath's x- or y-coordinates, can uncover otherwise subtle horizontal or vertical reversals in visual scanning. (3) Radial plots represent scanpaths as a set of radial arms about an origin, with each arm representing saccade counts or lengths within a binned set of absolute or relative angles. Radial plots can convey useful shape characteristics of scanpaths and can provide a basis for new metrics. Nine different prototype scanning strategies were represented by these plots, and heuristics were then developed to classify the major strategies. The heuristics were subsequently applied to real scanpath data to identify strategy trends. Future work will further automate the identification of scanning strategies to provide researchers with a tool to uncover and diagnose scanning-related challenges.
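
As a rough sketch of the binning behind a radial plot, the code below counts saccades within a set of absolute-angle bins derived from consecutive fixation positions. The bin count and input format are assumptions; the paper also considers saccade lengths and relative angles, which are omitted here.

```python
# Sketch: histogram of saccade directions over absolute-angle bins, as a basis
# for a radial plot. Eight bins and pixel-coordinate fixations are assumptions.
import math

def radial_bins(fixations, n_bins=8):
    """fixations: ordered list of (x, y) fixation centroids.
    Returns saccade counts per absolute-angle bin of width 360/n_bins degrees."""
    counts = [0] * n_bins
    for (x1, y1), (x2, y2) in zip(fixations, fixations[1:]):
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360.0
        counts[int(angle // (360.0 / n_bins)) % n_bins] += 1
    return counts
```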


Behavior Research Methods, Instruments, & Computers | 1995

Eye-gaze-contingent control of the computer interface: Methodology and example for zoom detection

Joseph H. Goldberg; Jack C. Schryver

Discrimination of user intent at the computer interface solely from eye gaze can provide a powerful tool, benefiting many applications. An exploratory methodology for discriminating zoom-in, zoom-out, and no-zoom intent was developed for such applications as telerobotics, disability aids, weapons systems, and process control interfaces. Real-time eye-gaze locations on a display are collected with an eye-tracking system. These data are then clustered off-line, using minimum spanning tree representations, and characterized. The cluster characteristics are fed into a multiple linear discriminant analysis, which attempts to discriminate the zoom-in, zoom-out, and no-zoom conditions. The methodologies, algorithms, and experimental data collection procedure are described, followed by example output from the analysis programs. Although developed specifically for the discrimination of zoom conditions, the methodology has broader potential for discriminating user intent in other interface operations.
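
A hedged sketch of minimum-spanning-tree clustering of gaze points, in the spirit of the off-line clustering step described above but not the authors' exact algorithm, is shown below. The distance threshold is an assumed parameter; cutting MST edges longer than the threshold leaves connected components that serve as clusters.

```python
# Illustrative MST clustering of gaze points: build a minimum spanning tree with
# Prim's algorithm, cut edges longer than cut_distance, and return the components.
# Not the authors' implementation; the 50-pixel threshold is an assumption.
import math

def mst_clusters(points, cut_distance=50.0):
    """points: list of (x, y) gaze locations in pixels."""
    n = len(points)
    if n == 0:
        return []
    in_tree = {0}
    edges = []                                   # (distance, i, j) MST edges
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                d = math.dist(points[i], points[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append(best)
        in_tree.add(best[2])
    parent = list(range(n))                      # union-find over kept edges
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for d, i, j in edges:
        if d <= cut_distance:
            parent[find(i)] = find(j)
    groups = {}
    for idx in range(n):
        groups.setdefault(find(idx), []).append(points[idx])
    return list(groups.values())
```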


International Journal of Industrial Ergonomics | 1989

Knowledge of results in visual inspection decisions: Sensitivity or criterion effect?

John Micalizzi; Joseph H. Goldberg

Whereas knowledge of results (KR), the knowledge received about the outcome of one's responses, has been shown to facilitate the learning of visual search in inspection, further research is required to determine its influence on decision-making, the other aspect of inspection. Twenty subjects, randomly assigned to either a KR or a No-KR group, performed a visual inspection task; defect probability and discriminability were manipulated within subjects, and the sequence of discriminability levels was manipulated between subjects. Under a signal detection model interpretation, KR increased sensitivity but had mixed effects on the response criterion. The sensitivity changes were interpreted via attention theory, and a cognitive model of KR utilization was presented.
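
For readers unfamiliar with the signal detection terms above, the sketch below computes the standard equal-variance sensitivity (d') and criterion (c) from hit and false-alarm rates. The example rates are invented and are not data from the study.

```python
# Standard equal-variance signal detection quantities: d' and criterion c.
# The 0.85 hit rate and 0.20 false-alarm rate are made-up example values.
from statistics import NormalDist

def d_prime_and_criterion(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf                     # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

print(d_prime_and_criterion(0.85, 0.20))  # approximately (1.88, -0.10)
```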

Collaboration


Dive into Joseph H. Goldberg's collaborations.

Top Co-Authors

Andris Freivalds, Pennsylvania State University
Xerxes P. Kotval, Pennsylvania State University
Jack C. Schryver, Oak Ridge National Laboratory
R. Darin Ellis, Pennsylvania State University
David J. Cannon, Pennsylvania State University
Esa Rantanen, Pennsylvania State University
John Micalizzi, Pennsylvania State University