Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jason S. Babcock is active.

Publication


Featured research published by Jason S. Babcock.


Human Vision and Electronic Imaging Conference | 2001

Using human observer eye movements in automatic image classifiers

Alejandro Jaimes; Jeff B. Pelz; Tim Grabowski; Jason S. Babcock; Shih-Fu Chang

We explore the way in which people look at images of different semantic categories and directly relate those results to computational approaches for automatic image classification. Our hypothesis is that the eye movements of human observers differ for images of different semantic categories, and that this information can be effectively used in automatic content-based classifiers. First, we present eye tracking experiments that show the variation in eye movements across different individuals for images of 5 different categories: handshakes, crowds, landscapes, main object in uncluttered background, and miscellaneous. The eye tracking results suggest that similar viewing patterns occur when different subjects view different images in the same semantic category. Using these results, we examine how empirical data obtained from eye tracking experiments across different semantic categories can be integrated with existing computational frameworks, or used to construct new ones. In particular, we examine the Visual Apprentice, a system in which image classifiers are learned from user input as the user defines a multiple-level object-definition hierarchy (an object and its parts) and labels examples for specific classes. The resulting classifiers are applied to automatically classify new images. Although many eye tracking experiments have been performed, to our knowledge, this is the first study that specifically compares eye movements across categories, and that links category-specific eye tracking results to automatic image classification techniques.
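Comparing viewing patterns across subjects and categories requires reducing each eye-movement record to a comparable summary. A minimal sketch of one such reduction, assuming fixations arrive as (x, y, duration) tuples and the image is divided into a uniform grid (the grid size and field layout are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def fixation_histogram(fixations, image_size, grid=(4, 4)):
    """Summarize a viewing record as normalized dwell time per grid cell.

    fixations: iterable of (x, y, duration) tuples (pixels, seconds)
    image_size: (width, height) of the stimulus in pixels
    grid: (rows, cols) of the summary grid -- an illustrative choice
    """
    rows, cols = grid
    w, h = image_size
    hist = np.zeros((rows, cols))
    for x, y, dur in fixations:
        # Clamp so fixations on the far edge land in the last cell
        r = min(int(y / h * rows), rows - 1)
        c = min(int(x / w * cols), cols - 1)
        hist[r, c] += dur
    total = hist.sum()
    return hist / total if total > 0 else hist  # normalize to a distribution
```

Flattened histograms like `hist.ravel()` can then be compared across subjects within a category, or concatenated with image features when training a content-based classifier.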


Human Vision and Electronic Imaging Conference | 2003

Eye tracking observers during color image evaluation tasks

Jason S. Babcock; Jeff B. Pelz; Mark D. Fairchild

Eye movement behavior was investigated for image-quality and chromatic adaptation tasks. The first experiment examined the differences between paired comparison, rank order, and graphical rating tasks, and the second experiment examined the strategies adopted when subjects were asked to select or adjust achromatic regions in images. Results indicate that subjects spent about 4 seconds per image in the rank order task, 1.8 seconds per image in the paired comparison task, and 3.5 seconds per image in the graphical rating task. Fixation density maps from the three tasks correlated highly in four of the five images. Eye movements gravitated toward faces and semantic features, and introspective report was not always consistent with fixation density peaks. In adjusting a gray square in an image to appear achromatic, observers spent 95% of their time looking only at the patch. When subjects looked around (less than 5% of the time), they did so early. Foveations were directed to semantic features, not achromatic regions, indicating that people do not seek out near-neutral regions to verify that their patch appears achromatic relative to the scene. Observers also do not scan the image in order to adapt to the average chromaticity of the image. In selecting the most achromatic region in an image, viewers spent 60% of the time scanning the scene. Unlike the achromatic adjustment task, foveations were directed to near-neutral regions, showing behavior similar to a visual search task.
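The cross-task comparison above hinges on fixation density maps. A minimal sketch of how such a map can be built and two maps compared, assuming fixation coordinates in pixels and Gaussian smoothing standing in for foveal extent (the sigma value is an illustrative assumption, not the paper's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, shape, sigma=25.0):
    """Accumulate fixations on an image-sized grid, then blur.

    fixations: iterable of (x, y) pixel coordinates
    shape: (height, width) of the stimulus
    sigma: Gaussian width in pixels (illustrative; depends on viewing geometry)
    """
    h, w = shape
    density = np.zeros(shape)
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < h and 0 <= xi < w:
            density[yi, xi] += 1
    density = gaussian_filter(density, sigma)
    total = density.sum()
    return density / total if total > 0 else density

def map_correlation(map_a, map_b):
    """Pearson correlation between two flattened density maps."""
    return float(np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1])
```

Correlating the maps produced by two tasks over the same image gives a single number summarizing how similar the two viewing distributions are, which is the kind of comparison reported above.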


Electronic Imaging | 2003

Experimental congruence of interval scale production from paired comparisons and ranking for image evaluation

John C. Handley; Jason S. Babcock; Jeff B. Pelz

Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment, in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment, a Pioneer plasma display and an Apple Cinema Display were used for stimulus presentation. Observers performed rank order and paired comparison tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 female, 14 male), ranging from 19 to 51 years of age, participated in this experiment. Using a paired comparison model and a ranking model, scales were estimated for each display and image combination, yielding ten scale pairs ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired comparisons data and the Bradley-Terry-Mallows model for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding the ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack-of-fit.
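For readers unfamiliar with the Bradley-Terry model named above, a minimal sketch of maximum-likelihood scale estimation from a paired-comparison win matrix, using the standard minorization-maximization iteration (the synthetic data and convergence settings are illustrative assumptions; this is not the authors' code):

```python
import numpy as np

def bradley_terry(wins, max_iter=500, tol=1e-9):
    """Maximum-likelihood Bradley-Terry strengths via the MM algorithm.

    wins[i, j] = number of times stimulus i was preferred over stimulus j.
    Returns strengths normalized to sum to 1; their logarithms form an
    interval scale. Assumes every stimulus wins at least one comparison.
    """
    n = wins.shape[0]
    p = np.full(n, 1.0 / n)
    w = wins.sum(axis=1)          # total wins per stimulus
    pair_counts = wins + wins.T   # comparisons per pair
    for _ in range(max_iter):
        denom = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i != j and pair_counts[i, j] > 0:
                    denom[i] += pair_counts[i, j] / (p[i] + p[j])
        p_new = w / denom
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

# Illustrative use: 6 stimuli per scene, synthetic preference counts
rng = np.random.default_rng(0)
wins = rng.integers(0, 10, size=(6, 6))
np.fill_diagonal(wins, 0)
print(bradley_terry(wins))
```

Likelihood ratio tests and the confidence intervals mentioned above are built on the same likelihood this iteration maximizes, evaluated at restricted versus unrestricted parameter values.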


Eye Tracking Research & Application | 2004

Building a lightweight eyetracking headgear

Jason S. Babcock; Jeff B. Pelz


Eye Tracking Research & Application | 2000

Extended tasks elicit complex eye movement patterns

Jeff B. Pelz; Roxanne L. Canosa; Jason S. Babcock


Storage and Retrieval for Image and Video Databases | 2000

Portable eyetracking: a study of natural eye movements

Jeff B. Pelz; Roxanne L. Canosa; Diane Kucharczyk; Jason S. Babcock; Amy Silver; Daisei Konno


Eye Tracking Research & Application | 2010

Head-mounted eye-tracking of infants' natural interactions: a new method

John M. Franchak; Kari S. Kretch; Kasey C. Soska; Jason S. Babcock; Karen E. Adolph


International Symposium on Wearable Computers | 2004

My own private kiosk: privacy-preserving public displays

Marc Eaddy; Gábor Blaskó; Jason S. Babcock; Steven Feiner


PICS | 2003

Eye Tracking Observers During Rank Order, Paired Comparison, and Graphical Rating Tasks

Jason S. Babcock; Jeff B. Pelz; Mark D. Fairchild


Electronic Imaging | 2002

How people look at pictures before, during, and after scene capture: Buswell revisited

Jason S. Babcock; Marianne Lipps; Jeff B. Pelz

Collaboration


Dive into Jason S. Babcock's collaboration.

Top Co-Authors

Jeff B. Pelz, Rochester Institute of Technology
Roxanne L. Canosa, Rochester Institute of Technology
Amy Silver, Rochester Institute of Technology
Daisei Konno, Rochester Institute of Technology
Diane Kucharczyk, Rochester Institute of Technology
Marianne Lipps, Rochester Institute of Technology
Mark D. Fairchild, Rochester Institute of Technology