Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Timothy F. Gee is active.

Publication


Featured research published by Timothy F. Gee.


Pattern Recognition | 2005

Face recognition using direct, weighted linear discriminant analysis and modular subspaces

Jeffery R. Price; Timothy F. Gee

We present a modular linear discriminant analysis (LDA) approach for face recognition. A set of observers is trained independently on different regions of frontal faces and each observer projects face images to a lower-dimensional subspace. These lower-dimensional subspaces are computed using LDA methods, including a new algorithm that we refer to as direct, weighted LDA or DW-LDA. DW-LDA combines the advantages of two recent LDA enhancements, namely direct LDA (D-LDA) and weighted pairwise Fisher criteria. Each observer performs recognition independently and the results are combined using a simple sum-rule. Experiments compare the proposed approach to other face recognition methods that employ linear dimensionality reduction. These experiments demonstrate that the modular LDA method performs significantly better than other linear subspace methods. The results also show that D-LDA does not necessarily perform better than the well-known principal component analysis followed by LDA approach. This is an important and significant counterpoint to previously published experiments that used smaller databases. Our experiments also indicate that the new DW-LDA algorithm is an improvement over D-LDA.
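
The modular structure lends itself to a short sketch: train one LDA observer per face region, project probes into each observer's subspace, and fuse the per-observer outputs with a sum rule. The sketch below is a minimal illustration that substitutes scikit-learn's standard Fisher LDA and class posteriors for the paper's D-LDA/DW-LDA projections and distance-based scores; the horizontal-band region split is likewise an assumption.

```python
# Minimal sketch of the modular-observer idea, assuming standard Fisher LDA
# from scikit-learn rather than the paper's D-LDA / DW-LDA algorithms.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def split_regions(images, n_regions=4):
    """Split each flattened face image (n_samples, h*w) into bands (illustrative)."""
    return np.array_split(images, n_regions, axis=1)

def train_observers(train_images, labels, n_regions=4):
    observers = []
    for region in split_regions(train_images, n_regions):
        lda = LinearDiscriminantAnalysis()
        lda.fit(region, labels)                 # one observer per face region
        observers.append(lda)
    return observers

def recognize(observers, probe_images, n_regions=4):
    """Sum-rule fusion: add class posteriors from all observers, then argmax."""
    regions = split_regions(probe_images, n_regions)
    scores = sum(obs.predict_proba(reg) for obs, reg in zip(observers, regions))
    return observers[0].classes_[np.argmax(scores, axis=1)]
```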


Seventh International Conference and Exposition on Engineering, Construction, Operations, and Business in Space | 2000

Position and orientation tracking system

Barry L. Burks; Fred W. DePiero; Gary A. Armstrong; John F. Jansen; Richard C. Muller; Timothy F. Gee

A position and orientation tracking system comprises a laser scanning apparatus having two measurement pods, a control station, and a detector array. The measurement pods can be mounted in the dome of a radioactive waste storage silo. Each measurement pod includes dual orthogonal laser scanner subsystems. The first laser scanner subsystem is oriented to emit a first line laser in the pan direction. The second laser scanner is oriented to emit a second line laser in the tilt direction. Both emitted line lasers scan planes across the radioactive waste surface to encounter the detector array mounted on a target robotic vehicle. The angles of incidence of the planes with the detector array are recorded by the control station. Combining measurements describing each of the four planes provides data for a closed form solution of the algebraic transform describing the position and orientation of the target robotic vehicle.
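
As a rough illustration of the closed-form geometry, the sketch below recovers a single detector's position from the scan planes observed to illuminate it, since the detector must lie on every such plane. This is a simplification of the patented method, which recovers the full position and orientation of the vehicle from four planes and the detector array; the plane normals and offsets are assumed to be already expressed in the silo frame.

```python
# Simplified sketch: recovering a detector's 3-D position from the scan planes
# observed to illuminate it.  Each plane is known in the silo frame from the
# pod location and the recorded scan angle; the detector must lie on every
# such plane, so stacking n_i . x = d_i gives a small linear system.
import numpy as np

def detector_position(normals, offsets):
    """Least-squares intersection of the measured scan planes.

    normals: (k, 3) array of unit plane normals in the silo frame
    offsets: (k,)   array with n_i . x = d_i for each plane
    """
    x, *_ = np.linalg.lstsq(np.asarray(normals), np.asarray(offsets), rcond=None)
    return x

# Example with three mutually orthogonal planes meeting at (1, 2, 3):
print(detector_position(np.eye(3), np.array([1.0, 2.0, 3.0])))
```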


applied imagery pattern recognition workshop | 2001

Towards robust face recognition from video

Jeffery R. Price; Timothy F. Gee

A novel, template-based method for face recognition is presented. The goals of the proposed method are to integrate multiple observations for improved robustness and to provide auxiliary confidence data for subsequent use in an automated video surveillance system. The proposed framework consists of a parallel system of classifiers, referred to as observers, where each observer is trained on one face region. The observer outputs are combined to yield the final recognition result. Three of the four confounding factors (expression, illumination, and decoration) are specifically addressed in this paper. The extension of the proposed approach to address the fourth confounding factor, pose, is straightforward and well supported in previous work. A further contribution of the proposed approach is the computation of a revealing confidence measure. This confidence measure will aid the subsequent application of the proposed method to video surveillance scenarios. Results are reported for a database comprising 676 images of 160 subjects under a variety of challenging circumstances. These results indicate significant performance improvements over previous methods and demonstrate the usefulness of the confidence data.
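
The fusion and confidence steps can be sketched compactly. The margin between the best and second-best combined scores below is an illustrative stand-in for the paper's confidence measure, and the sum rule mirrors the fusion used in the authors' modular LDA work.

```python
# Sketch of fusing parallel observer outputs and deriving a simple confidence
# value.  The margin-based confidence here (best minus second-best combined
# score) is an illustrative stand-in for the paper's confidence measure.
import numpy as np

def fuse_with_confidence(observer_scores):
    """observer_scores: list of (n_classes,) score vectors, one per observer."""
    combined = np.sum(observer_scores, axis=0)       # sum-rule fusion
    order = np.argsort(combined)[::-1]
    best, second = combined[order[0]], combined[order[1]]
    confidence = (best - second) / (abs(best) + 1e-9)
    return order[0], confidence

# Usage: two observers voting over three identities.
label, conf = fuse_with_confidence([np.array([0.7, 0.2, 0.1]),
                                    np.array([0.6, 0.3, 0.1])])
```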


computer vision and pattern recognition | 2007

On the Efficacy of Correcting for Refractive Effects in Iris Recognition

Jeffery R. Price; Timothy F. Gee; Vincent C. Paquit; Kenneth W. Tobin

In this study, we aim to determine if iris recognition accuracy might be improved by correcting for the refractive effects of the human eye when the optical axes of the eye and camera are misaligned. We undertake this investigation using an anatomically-approximated, three-dimensional model of the human eye and ray-tracing. We generate synthetic iris imagery from different viewing angles using first a simple pattern of concentric rings on the iris for analysis, and then synthetic texture maps on the iris for experimentation. We estimate the distortion from the concentric-ring iris images and use the results to guide the sampling of textured iris images that are distorted by refraction. Using the well-known Gabor filter phase quantization approach, our model-based results indicate that the Hamming distances between iris signatures from different viewing angles can be significantly reduced by accounting for refraction. Over our experimental conditions comprising viewing angles from 0 to 60 degrees, we observe a median reduction in Hamming distance of 27.4% and a maximum reduction of 70.0% when we compensate for refraction. Maximum improvements are observed at viewing angles of 20-25 degrees.
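
The signature and matching stage referenced above follows the standard Gabor phase-quantization scheme. The sketch below encodes a normalized (unwrapped) iris image with a single Gabor frequency and orientation, which is a simplification; a practical encoder sweeps several, and the choice of frequency here is an assumption.

```python
# Sketch of the Gabor-phase iris-code / Hamming-distance step referenced in
# the abstract.  A single Gabor frequency and orientation are used for brevity.
import numpy as np
from skimage.filters import gabor

def iris_code(unwrapped_iris, frequency=0.1):
    """2-bit phase quantization of a normalized (unwrapped) iris image."""
    real, imag = gabor(unwrapped_iris, frequency=frequency)
    return np.stack([real > 0, imag > 0])            # sign of each quadrature part

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two iris codes."""
    return np.mean(code_a != code_b)

# Distances between codes from aligned and off-axis renderings would be
# compared before and after the refraction correction.
```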


conference on image and video communications and processing | 2000

Multiframe combination and blur deconvolution of video data

Timothy F. Gee; Thomas P. Karnowski; Kenneth W. Tobin

In this paper we present a technique that may be applied to surveillance video data to obtain a higher-quality image from a sequence of lower-quality images. The increase in quality is derived through a deconvolution of optical blur and/or an increase in spatial sampling. To process sequences of real forensic video data, three main steps are required: frame and region selection, displacement estimation, and original image estimation. A user-identified region-of-interest (ROI) is compared to other frames in the sequence. The areas that are suitable matches are identified and used for displacement estimation. The calculated displacement vector images describe the transformation of the desired high-quality image to the observed low-quality images. The final stage is based on the Projection Onto Convex Sets (POCS) super-resolution approach of Patti, Sezan, and Tekalp. This stage performs a deconvolution using the observed image sequence, displacement vectors, and an a priori known blur model. A description of the algorithmic steps is provided, and an example input sequence with corresponding output image is given.
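
A greatly simplified version of the reconstruction stage is sketched below as an iterative back-projection loop in the spirit of POCS: each pass warps and blurs the current estimate into every observed frame, measures the violation of that frame's constraint, and pushes the estimate back toward consistency. Integer-pixel shifts, a Gaussian blur, and the omission of spatial upsampling are all simplifying assumptions relative to the Patti, Sezan, and Tekalp formulation.

```python
# Simplified sketch of the reconstruction stage: an iterative back-projection
# loop in the spirit of POCS super-resolution.  Pixel shifts and a Gaussian
# blur stand in for the registered displacement fields and the a priori blur
# model of the actual method.
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def reconstruct(frames, displacements, blur_sigma=1.0, iterations=20, step=0.2):
    estimate = np.mean(frames, axis=0)               # start from the frame average
    for _ in range(iterations):
        for frame, (dy, dx) in zip(frames, displacements):
            # Predict this observation from the current estimate.
            predicted = gaussian_filter(shift(estimate, (dy, dx)), blur_sigma)
            residual = frame - predicted             # violation of this frame's constraint
            # Push the estimate back toward consistency with the observation.
            estimate += step * shift(residual, (-dy, -dx))
    return estimate
```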


international conference on acoustics, speech, and signal processing | 2001

Blur estimation in limited-control environments

Jeffery R. Price; Timothy F. Gee; Kenneth W. Tobin

We propose a method to estimate the blur of a fixed imaging system, without control of camera position or lighting, using an inexpensive target. Such a method is applicable, for example, in the restoration of surveillance imagery where the imaging system is available, but with only limited control of the imaging conditions. We extend a previously proposed parametric blur model and maximum likelihood technique to estimate a more general family of blur functions. The requirements for an appropriate characterization target are also discussed. Experimental results with artificial and real data are presented to validate the proposed approach.
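
The idea can be illustrated with a one-parameter blur family. The sketch below fits a Gaussian PSF width by comparing a blurred version of the known target pattern against the observed target image; the Gaussian family and the least-squares cost are stand-ins for the more general parametric family and maximum-likelihood criterion of the paper.

```python
# Illustrative sketch of parametric blur estimation from a known target:
# a Gaussian PSF and a least-squares cost stand in for the more general
# parametric family and maximum-likelihood criterion described in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

def estimate_blur_sigma(ideal_target, observed, max_sigma=10.0):
    """Find the Gaussian blur width that best explains the observed target image."""
    def cost(sigma):
        return np.sum((gaussian_filter(ideal_target, sigma) - observed) ** 2)
    result = minimize_scalar(cost, bounds=(0.1, max_sigma), method="bounded")
    return result.x
```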


applied imagery pattern recognition workshop | 2001

Model-based face tracking for dense motion field estimation

Timothy F. Gee; Russell M. Mersereau

When estimating the dense motion field of a video sequence, if little is known or assumed about the content, a limited-constraint approach such as optical flow must be used. Since optical flow algorithms generally use a small spatial area in the determination of each motion vector, the resulting motion field can be noisy, particularly if the input video sequence is noisy. If the moving subject is known to be a face, then we may use that constraint to improve the motion field results. This paper describes a method for deriving dense motion field data using a face tracking approach. A face model is manually initialized to fit a face at the beginning of the input sequence. Then a Kalman filtering approach is used to track the face movements and successively fit the face model to the face in each frame. The 2D displacement vectors are calculated from the projection of the facial model, which is allowed to move in 3D space and may have a 3D shape. We have experimented with planar, cylindrical, and Candide face models. The resulting motion field is used in multiple frame restoration of a face in noisy video.
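
The tracking step can be illustrated with a plain constant-velocity Kalman filter over the face-model pose parameters, where each frame's fitted pose serves as the measurement. The state layout and noise levels below are illustrative assumptions, not the paper's specific filter design.

```python
# Minimal constant-velocity Kalman filter over a vector of face-pose
# parameters (e.g. 3-D translation and rotation of the model).  The process
# and measurement noise values are illustrative assumptions.
import numpy as np

def kalman_track(measurements, q=1e-3, r=1e-2):
    n = measurements.shape[1]                       # number of pose parameters
    x = np.zeros(2 * n)                             # state: [pose, pose velocity]
    P = np.eye(2 * n)
    F = np.eye(2 * n); F[:n, n:] = np.eye(n)        # constant-velocity model
    H = np.hstack([np.eye(n), np.zeros((n, n))])    # we measure pose only
    Q, R = q * np.eye(2 * n), r * np.eye(n)
    track = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q               # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)                     # update with this frame's fitted pose
        P = (np.eye(2 * n) - K @ H) @ P
        track.append(x[:n].copy())
    return np.asarray(track)
```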


international symposium on visual computing | 2006

On asymmetric classifier training for detector cascades

Timothy F. Gee

This paper examines the Asymmetric AdaBoost algorithm introduced by Viola and Jones for cascaded face detection. The Viola and Jones face detector uses cascaded classifiers to successively filter, or reject, non-faces. In this approach most non-faces are easily rejected by the earlier classifiers in the cascade, thus reducing the overall number of computations. This requires the earlier cascade classifiers to reject true instances of faces only very rarely. To reflect this training goal, Viola and Jones introduce a weighting parameter for AdaBoost iterations and show that it enforces a desirable bound. In their implementation, they introduced a modification to the proposed weighting that still enforces the same bound. The goal of this paper is to examine their asymmetric weighting by putting AdaBoost in the form of Additive Regression, as was done by Friedman, Hastie, and Tibshirani. The author believes this helps to explain the approach and adds another connection between AdaBoost and Additive Regression.
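
The weighting under discussion can be shown in a few lines. The sketch below applies the per-round asymmetric multiplier described by Viola and Jones, scaling positive and negative example weights by exp(+/- log(sqrt(k)) / T) each round so the asymmetry is spread over all T rounds rather than absorbed by the first weak learner; the decision-stump weak learner from scikit-learn is an assumption made for illustration.

```python
# Sketch of AdaBoost with the per-round asymmetric reweighting of Viola and
# Jones: positives are made k times costlier to misclassify, with the
# asymmetry applied gradually over the T rounds.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def asymmetric_adaboost(X, y, k=2.0, T=10):
    """y in {-1, +1}; k > 1 makes false negatives (missed faces) costlier."""
    n = len(y)
    w = np.ones(n) / n
    learners, alphas = [], []
    for _ in range(T):
        w *= np.exp(y * np.log(np.sqrt(k)) / T)     # asymmetric multiplier, per round
        w /= w.sum()
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)              # usual AdaBoost update
        learners.append(stump); alphas.append(alpha)
    return learners, np.array(alphas)
```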


international symposium on visual computing | 2006

Segmentation-Based Registration of Organs in Intraoperative Video Sequences

James S. Goddard; Timothy F. Gee; Hengliang Wang; Alexander M. Gorbach

Intraoperative optical imaging of exposed organs in visible, near-infrared, and infrared (IR) wavelengths in the body has the potential to be useful for real-time assessment of organ viability and image guidance during surgical intervention. However, the motion of the internal organs presents significant challenges for fast analysis of recorded 2D video sequences. The movement observed during surgery, due to respiration, cardiac motion, blood flow, and mechanical shift accompanying the surgical intervention, causes organ reflection in the image sequence, making optical measurements for further analysis challenging. Correcting alignment is difficult in that the motion is not uniform over the image. This paper describes a Canny edge-based method for segmentation of the specific organ or region under study, along with a moment-based registration method for the segmented region. Experimental results are provided for a set of intraoperative IR image sequences.
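
The two stages map to a short sketch: a Canny edge map filled to a rough organ mask, then moment-based alignment of each frame to a reference. The version below registers translation only, from the zeroth- and first-order moments (the centroid); rotation from the second-order moments would be handled analogously, and the smoothing parameter is an assumption.

```python
# Sketch of the two stages described above: Canny-based segmentation of the
# organ region, then moment-based (here translation-only) registration of
# each frame to a reference frame.
import numpy as np
from scipy.ndimage import binary_fill_holes, center_of_mass, shift
from skimage.feature import canny

def segment_organ(frame, sigma=2.0):
    """Closed Canny edges filled to produce a rough organ mask."""
    return binary_fill_holes(canny(frame, sigma=sigma))

def register_sequence(frames):
    masks = [segment_organ(f) for f in frames]
    ref_cy, ref_cx = center_of_mass(masks[0])        # reference centroid (first-order moments)
    registered = []
    for frame, mask in zip(frames, masks):
        cy, cx = center_of_mass(mask)
        registered.append(shift(frame, (ref_cy - cy, ref_cx - cx)))
    return registered
```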


Archive | 2008

Image registration method for medical image sequences

Timothy F. Gee; James S. Goddard

Collaboration


Dive into Timothy F. Gee's collaborations.

Top Co-Authors

James S. Goddard (Oak Ridge National Laboratory)
Thomas P. Karnowski (Oak Ridge National Laboratory)
Jeffery R. Price (Oak Ridge National Laboratory)
Lorenzo Fabris (Oak Ridge National Laboratory)
Mark F. Cunningham (Oak Ridge National Laboratory)
Klaus-Peter Ziock (Oak Ridge National Laboratory)
Fred W. DePiero (California Polytechnic State University)
Kenneth W. Tobin (Oak Ridge National Laboratory)
Donald Eric Hornback (Oak Ridge National Laboratory)