Thomas B. Kinsman
Rochester Institute of Technology
Publications
Featured research published by Thomas B. Kinsman.
Eye Tracking Research & Applications | 2010
Daniel F. Pontillo; Thomas B. Kinsman; Jeff B. Pelz
Laboratory eyetrackers, constrained to a fixed display and a static (or accurately tracked) observer, facilitate automated analysis of fixation data. The development of wearable eyetrackers has extended the environments and tasks that can be studied, at the expense of automated analysis. Wearable eyetrackers provide 2D point-of-regard (POR) in scene-camera coordinates, but the researcher is typically interested in some high-level semantic property (e.g., object identity, region, or material) surrounding individual fixation points. The synthesis of POR into fixations and semantic information remains a labor-intensive manual task, limiting the application of wearable eyetracking. We describe a system that segments POR videos into fixations and allows users to train a database-driven object-recognition system. A correctly trained library results in a very accurate, semi-automated translation of raw POR data into a sequence of objects, regions, or materials.
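The abstract does not name the fixation-segmentation algorithm used; as an illustration of the kind of step involved, here is a minimal velocity-threshold (I-VT) fixation detector, one common approach. The function name, the threshold value, and the coordinate units are assumptions for the sketch, not details from the paper.

```python
import numpy as np

def ivt_fixations(points, timestamps, velocity_threshold=50.0):
    """Velocity-threshold (I-VT) fixation detection sketch.

    points: (N, 2) array of raw point-of-regard samples.
    timestamps: N monotonically increasing times in seconds.
    velocity_threshold: units/second; illustrative value, not from the paper.
    Returns a list of (start_time, end_time, centroid) fixations.
    """
    pts = np.asarray(points, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    # Sample-to-sample gaze velocities.
    v = np.linalg.norm(np.diff(pts, axis=0), axis=1) / np.diff(t)
    is_fix = np.concatenate([[True], v < velocity_threshold])
    # Group consecutive below-threshold samples into fixation events.
    fixations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i
        elif not f and start is not None:
            fixations.append((t[start], t[i - 1], pts[start:i].mean(axis=0)))
            start = None
    if start is not None:
        fixations.append((t[start], t[-1], pts[start:].mean(axis=0)))
    return fixations

# Usage on synthetic data: two steady gaze positions with one saccade between.
t = np.arange(0, 1.0, 0.02)
gaze = np.repeat([[100.0, 120.0], [300.0, 200.0]], len(t) // 2, axis=0)
print(ivt_fixations(gaze, t))
```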
2012 Western New York Image Processing Workshop | 2012
Thomas B. Kinsman; Mark D. Fairchild; Jeff B. Pelz
Using a metric feature space for pattern recognition, data mining, and machine learning greatly simplifies the mathematics because distances are preserved under rotation and translation in feature space. A metric space also provides a “ruler”: an absolute measure of how different two feature vectors are. In the computer vision community, color differences are easily mistreated as metric distances. This paper serves as an introduction to why working in a non-metric space is a challenge, and details why color difference is not a valid Euclidean distance metric.
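To make the point concrete, the sketch below (an illustration of the general phenomenon, not code from the paper) compares Euclidean distances between color pairs in sRGB against distances in CIELAB, a space designed to be more perceptually uniform: two pairs that are equidistant in RGB are not equally different perceptually. The conversion constants are the standard sRGB/D65 values.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma (companding) to get linear RGB.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> CIE XYZ using the standard sRGB primaries.
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ linear
    xyz /= np.array([0.95047, 1.0, 1.08883])  # D65 reference white
    f = np.where(xyz > (6/29) ** 3, np.cbrt(xyz), xyz / (3 * (6/29) ** 2) + 4/29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

# Two color pairs with identical Euclidean distance (0.1) in RGB space.
pair1 = (np.array([0.0, 0.0, 0.5]), np.array([0.0, 0.0, 0.6]))  # dark blues
pair2 = (np.array([0.0, 0.5, 0.0]), np.array([0.0, 0.6, 0.0]))  # mid greens

for c1, c2 in (pair1, pair2):
    d_rgb = np.linalg.norm(c1 - c2)
    d_lab = np.linalg.norm(srgb_to_lab(c1) - srgb_to_lab(c2))
    print(f"RGB distance: {d_rgb:.3f}   CIELAB distance: {d_lab:.2f}")
```

The unequal CIELAB distances show why treating raw RGB Euclidean distance as a perceptual difference measure is a mistake.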
2010 Western New York Image Processing Workshop | 2010
Thomas B. Kinsman; Peter Bajorski; Jeff B. Pelz
The classification of a large number of images is a familiar problem in the image processing community. It occurs in consumer photography, bioinformatics, biomedical imaging, surveillance, and in the field of mobile eye-tracking studies. During eye-tracking studies, what a person looks at is recorded, and every frame must then be analyzed and classified. In many cases the data-analysis time restricts the scope of the studies. This paper describes the initial use of hierarchical clustering of these images to minimize the time required during analysis. Pre-clustering the images allows the user to classify a large number of images simultaneously. The success of this method depends on meeting requirements for human-computer interaction, which are also discussed.
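A minimal sketch of the pre-clustering idea using SciPy's agglomerative (hierarchical) clustering; the feature representation, Ward linkage, and cluster count below are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical feature vectors: one row per video frame (e.g., a global
# color histogram or similar descriptor extracted beforehand).
rng = np.random.default_rng(0)
features = rng.random((500, 64))

# Agglomerative clustering with Ward linkage over Euclidean distances.
Z = linkage(features, method="ward")

# Cut the dendrogram into a manageable number of groups; a coder can then
# label each group at once instead of labeling 500 frames individually.
labels = fcluster(Z, t=20, criterion="maxclust")
for k in range(1, labels.max() + 1):
    print(f"cluster {k}: {np.sum(labels == k)} frames")
```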
Ubiquitous Computing | 2012
Thomas B. Kinsman; Jeff B. Pelz
Using infrared-based mobile eye trackers outdoors is difficult, and considered intractable by some [1, 2]. The challenge of bright, uncontrolled daylight illumination complicates the process of locating the subject's pupil. To make mobile eye tracking more ubiquitous, we are developing more sophisticated algorithms to find the subject's pupil. We use a semi-supervised process to initiate the pupil tracking, automatically generate an ensemble of models of the pupil for each video, and use multi-frame techniques to help locate the pupil across frames. A mixture of experts (consensus) is used to indicate a good estimate of pupil location. The algorithm presented here details developing work in automatically finding the pupil in situations where a significant amount of light reflects off the eye, where the subject is squinting, and where the pupil is partially occluded. The output of this algorithm will be cascaded into a subsequent stage for exact pupil fitting.
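The abstract does not spell out the consensus rule, so the following is only a sketch of how an agreement check over several pupil-center candidates might be structured; the median combiner, the function name, and the max_spread threshold are all assumptions, not the paper's method.

```python
import numpy as np

def consensus_pupil_estimate(candidates, max_spread=5.0):
    """Combine (x, y) pupil-center candidates from several detectors.

    Hypothetical consensus step: if the candidates agree tightly, return
    their median as a confident estimate; otherwise flag the frame as
    unreliable. The 5-pixel spread threshold is illustrative.
    """
    pts = np.asarray(candidates, dtype=float)
    center = np.median(pts, axis=0)
    # Typical deviation of candidates from the median, in pixels.
    spread = np.median(np.linalg.norm(pts - center, axis=1))
    return center, spread <= max_spread

# Hypothetical outputs from three pupil detectors on one frame.
estimates = [(212.0, 158.5), (214.3, 160.1), (211.1, 157.9)]
center, ok = consensus_pupil_estimate(estimates)
print(center, "confident" if ok else "unreliable")
```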
Eye Tracking Research & Applications | 2014
Thomas B. Kinsman; Jeff B. Pelz
To create input videos for testing pupil-detection algorithms for outdoor eye tracking, we developed a simulation of the eye that includes front-surface (corneal) reflections and refraction at the air/cornea and cornea/aqueous boundaries. The scene and iris are simulated using texture mapping and are alpha-blended to produce the final image of the eye with reflections and refractions. Simulating refraction is important in order to observe the elliptical shape the pupil takes on as it moves off axis, and to account for the difference between the true pupil position and the apparent (entrance) pupil position. Sequences of images are combined to produce input videos for testing the next generation of pupil detection and tracking algorithms, which must sort the pupil out from distracting edges and reflected objects.
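Refraction at the air/cornea boundary follows Snell's law; a small vector-form sketch is below. The refract helper and the index values (air ≈ 1.000, cornea ≈ 1.376, both standard approximations) are illustrative, not taken from the paper's simulation code.

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Refract a unit direction vector at a surface via Snell's law.

    incident, normal: unit 3-vectors (normal points against the ray).
    n1, n2: refractive indices on the incoming/outgoing sides.
    Returns the refracted unit vector, or None on total internal reflection.
    """
    eta = n1 / n2
    cos_i = -np.dot(normal, incident)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

# Example: a ray entering the cornea off axis bends toward the normal,
# which is what distorts the apparent (entrance) pupil position.
ray = np.array([0.3, 0.0, -1.0])
ray /= np.linalg.norm(ray)
surface_normal = np.array([0.0, 0.0, 1.0])
print(refract(ray, surface_normal, 1.000, 1.376))
```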
Image Processing Workshop (WNYIPW), 2013 IEEE Western New York | 2013
Thomas B. Kinsman; Jeff B. Pelz
It is well known that using the correct features for pattern recognition is far more important than using a sophisticated classifier. A high-order classifier, given inadequate features, will produce poor results. Low-level features are combined to form mid-level features, which have much more discriminating power. Yet the challenge of feature selection is often neglected in the literature, which typically assumes that given N low-level features there are 2^N − 1 ways to use them (the number of non-empty subsets); this significantly understates the challenge of finding the best features to use and the best ways to combine them. Basic low-level features (input measurements) must be combined in groups to construct features that are relevant for object recognition [1], yet the computational complexity of grouping measurements for input to a pattern recognition system makes the task very difficult. This paper discusses a method for quantifying the total number of ways to group a given number of low-level features, toward a better understanding of the feature selection problem.
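The abstract does not give the paper's exact combinatorial model, but one natural count of the ways to group N features into non-empty groups is the set-partition count, the Bell number B_N, which grows far faster than the 2^N − 1 subset count. The sketch below (hypothetical helper name) computes both for comparison using the Bell triangle.

```python
def bell_numbers(n_max):
    """Bell numbers B_1..B_{n_max} via the Bell triangle: B_n counts the
    ways to partition a set of n items into non-empty groups."""
    bells = []
    row = [1]
    bells.append(row[-1])  # B_1 = 1
    for _ in range(n_max - 1):
        # Next row starts with the last entry of the current row; each
        # following entry adds its neighbor from the row above.
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
        bells.append(row[-1])
    return bells

for n, b in enumerate(bell_numbers(10), start=1):
    print(f"N={n:2d}: non-empty subsets 2^N - 1 = {2**n - 1:5d}, groupings (Bell) = {b}")
```

Already at N = 10 there are 1,023 non-empty subsets but 115,975 distinct groupings, which illustrates how badly the subset count understates the search space.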
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis | 2011
Thomas B. Kinsman; Jeff B. Pelz
This paper describes the initial pre-processing steps used to follow the motions of the human eye in an eye tracking application. Portable eye tracking involves tracking a subject's pupil over the course of a study. The central method models each pixel as belonging to one of three classes: dark pupil, bright highlight, or neutral. This paper describes very preliminary results from using a mixture model as a processing stage, and discusses the technical issues involved. The pixel classifications from the mixture model were fed into a naïve Bayes pupil tracker. Only low-level information is used for pupil identification: no motion tracking is performed, no belief propagation is performed, and no convolutions are computed, so the algorithm is well positioned for parallel implementation. The solution surmounts several technical challenges, and initial results are unexpectedly accurate. The technique shows good promise for incorporation into a system for automatic eye-to-scene calibration.
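As a rough illustration of the three-class pixel mixture idea, here is a one-dimensional Gaussian mixture over pixel intensities using scikit-learn; the synthetic intensities and the lowest-mean-component-is-pupil interpretation are assumptions for the sketch, not the paper's model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical grayscale eye-image intensities, one value per pixel.
# A real frame would come from the eye camera.
rng = np.random.default_rng(1)
frame = np.concatenate([
    rng.normal(30, 8, 2_000),     # dark pupil pixels
    rng.normal(235, 6, 300),      # bright corneal-highlight pixels
    rng.normal(120, 25, 20_000),  # neutral iris/skin pixels
]).clip(0, 255).reshape(-1, 1)

# Fit a three-component 1-D Gaussian mixture: pupil / highlight / neutral.
gmm = GaussianMixture(n_components=3, random_state=0).fit(frame)
labels = gmm.predict(frame)

# Take the component with the lowest mean as the dark-pupil class.
pupil_class = int(np.argmin(gmm.means_.ravel()))
print("pupil component mean:", gmm.means_.ravel()[pupil_class])
print("pixels assigned to pupil class:", int(np.sum(labels == pupil_class)))
```

Because each pixel is classified independently, this kind of stage parallelizes trivially, consistent with the abstract's note that the algorithm is well positioned for parallel implementation.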
Eye Tracking Research & Applications | 2012
Thomas B. Kinsman; Karen M. Evans; Glenn Sweeney; Tommy P. Keane; Jeff B. Pelz
Proceedings of SPIE | 2011
Jeff B. Pelz; Thomas B. Kinsman; Karen M. Evans
Archive | 2012
Jeff B. Pelz; Thomas B. Kinsman; Daniel F. Pontillo; Susan M. Munn; Nicholas R. Harrington; Brendon Ben-Kan Hsieh