Frederick Wilson Wheeler
General Electric
Publications
Featured research published by Frederick Wilson Wheeler.
Electronic Imaging | 2005
Frederick Wilson Wheeler; Ralph Thomas Hoctor; Eamon B. Barrett
In this report we propose a frequency domain POCS algorithm for the canonical problem of super-resolution (SR) image synthesis. Unlike previous frequency domain SR algorithms, this approach is structured to accommodate rotations of the source relative to the imaging device, which we believe helps produce a well-conditioned image synthesis problem. Generally, frequency domain methods have been used when component images were related by subpixel shifts only, because rotations of a sampled image do not correspond to a simple operation in the frequency domain.
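The contrast drawn in the abstract rests on the Fourier shift theorem: a subpixel translation becomes a simple linear phase ramp in the frequency domain, whereas a rotation of a sampled image has no comparably simple spectral form. The sketch below illustrates only that shift property; it is not the paper's algorithm, and the function name is an assumption.

```python
# A minimal sketch of the Fourier shift theorem, the property that makes
# subpixel translations easy to model in frequency-domain SR methods.
# Illustrative only; not the POCS algorithm described in the paper.
import numpy as np

def subpixel_shift(img, dy, dx):
    """Shift a 2-D image by a (possibly fractional) number of pixels
    by applying a linear phase ramp to its Fourier transform."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]   # vertical frequencies, cycles per sample
    fx = np.fft.fftfreq(w)[None, :]   # horizontal frequencies, cycles per sample
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

# Example: shift a test image by a quarter pixel in each direction.
img = np.random.rand(64, 64)
shifted = subpixel_shift(img, 0.25, 0.25)
```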
International Conference on Biometrics: Theory, Applications and Systems | 2008
Frederick Wilson Wheeler; A. G. Amitha Perera; Gil Abramovich; Bing Yu; Peter Henry Tu
The iris is a highly accurate biometric identifier. However, widespread adoption is hindered by the difficulty of capturing high-quality iris images with minimal user cooperation. This paper describes a first-generation prototype iris identification system designed for stand-off cooperative access control. The system identifies individuals who stand in front of and face it, in 3.2 seconds on average. Subjects within a capture zone are imaged with a calibrated pair of wide-field-of-view surveillance cameras. A subject is located in three dimensions using face detection and triangulation. A zoomed near-infrared iris camera on a pan-tilt platform is then targeted to the subject. The iris camera lens has its focal distance automatically adjusted based on the subject distance. Integrated with the iris camera on the pan-tilt platform is a near-infrared illuminator composed of an array of directed LEDs. Video frames from the iris camera are processed to detect and segment the iris, generate a template and then identify the subject.
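The localization step described above, detecting the face in each of two calibrated views and triangulating, can be sketched with standard linear (DLT) triangulation. The projection matrices and pixel coordinates below are assumed inputs, not values from the paper.

```python
# A hedged sketch of the triangulation step: given a face detection in each of
# two calibrated cameras (3x4 projection matrices P1, P2 -- assumed inputs),
# recover the subject's 3-D position by linear (DLT) triangulation. The
# pan-tilt iris camera would then be targeted using this 3-D point.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point.
    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the same point in each view."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # homogeneous -> Euclidean coordinates
```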
International Conference on Biometrics: Theory, Applications and Systems | 2010
Frederick Wilson Wheeler; Richard L. Weiss; Peter Henry Tu
Face recognition at a distance is concerned with the automatic recognition of non-cooperative subjects over a wide area. This remote biometric collection and identification problem can be addressed with an active vision system where people are detected and tracked with wide-field-of-view cameras, and narrow-field-of-view pan-tilt-zoom cameras are automatically controlled to collect high-resolution facial images. We have developed a prototype active-vision face recognition at a distance system that we call the Biometric Surveillance System. In this paper we review related prior work, describe the design and operation of this system, and provide experimental performance results. The system features predictive subject targeting and an adaptive target selection mechanism based on the current actions and history of each tracked subject to help ensure that facial images are captured for all subjects in view. Experimental tests designed to simulate operation in large transportation hubs show that the system can track subjects and capture facial images at distances of 25–50 m and can recognize them using a commercial face recognition system at a distance of 15–20 m.
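Driving a pan-tilt-zoom camera at a tracked subject reduces, at its core, to converting the subject's 3-D position into pan and tilt angles. The sketch below uses an assumed coordinate convention (x right, y up, z forward in the PTZ camera's own frame); it is an illustration of the targeting geometry, not the system's actual control code, which also includes predictive targeting.

```python
# A minimal sketch of PTZ targeting: convert a tracked 3-D head position,
# expressed in the pan-tilt camera's own frame (assumed convention: x right,
# y up, z forward), into pan and tilt angles that center the subject.
import numpy as np

def ptz_angles(x, y, z):
    """Pan/tilt angles (radians) that point the optical axis at (x, y, z)."""
    pan = np.arctan2(x, z)                # rotation about the vertical axis
    tilt = np.arctan2(y, np.hypot(x, z))  # elevation toward the target
    return pan, tilt

pan, tilt = ptz_angles(3.0, 1.5, 30.0)    # e.g. a subject roughly 30 m down-range
```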
International Conference on Biometrics: Theory, Applications and Systems | 2007
Frederick Wilson Wheeler; Xiaoming Liu; Peter Henry Tu
Face recognition at a distance is a challenging and important law-enforcement surveillance problem, with low image resolution and blur contributing to the difficulties. We present a method for combining a sequence of video frames of a subject in order to create a super-resolved image of the face with increased resolution and reduced blur. An Active Appearance Model (AAM) of face shape and appearance is fit to the face in each video frame. The AAM fit provides the registration used by a robust image super-resolution algorithm that iteratively solves for a higher resolution face image from a set of video frames. This process is tested with real-world outdoor video using a PTZ camera and a commercial face recognition engine. Both improved visual perception and automatic face recognition performance are observed in these experiments.
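Once the AAM fit has registered the frames onto a common face-centered grid, the super-resolution step amounts to inverting a downsampling/blur model across many frames with a robust data term. The sketch below assumes the registration is already done and uses a simple box-average forward model with an L1 (sign-based) update; names and the forward model are illustrative, not the paper's exact formulation.

```python
# A hedged sketch of robust multi-frame super-resolution for pre-registered
# frames. Each low-res frame is modeled as a box-downsampled copy of the
# unknown high-res image; the sign-based (L1) gradient step tolerates outlier
# frames. Illustrative assumptions throughout, not the paper's algorithm.
import numpy as np

def downsample(x, s):
    """Box-average downsampling by integer factor s (stand-in for PSF + decimation)."""
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(y, s):
    """Adjoint of box-average downsampling: replicate each pixel, divide by s^2."""
    return np.repeat(np.repeat(y, s, axis=0), s, axis=1) / (s * s)

def robust_sr(frames, s, n_iter=50, step=1.0):
    """Estimate a high-resolution image from registered low-resolution frames."""
    # Replicated-mean initialization at the high-resolution grid size.
    x = np.repeat(np.repeat(np.mean(frames, axis=0), s, axis=0), s, axis=1)
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y in frames:
            # L1 data term: the sign makes the update robust to outlier frames.
            grad += upsample(np.sign(y - downsample(x, s)), s)
        x += step * grad / len(frames)
    return x
```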
Unattended Ground, Sea, and Air Sensor Technologies and Applications IX | 2007
Peter Henry Tu; Gianfranco Doretto; Nils Krahnstoever; A. G. Amitha Perera; Frederick Wilson Wheeler; Xiaoming Liu; Jens Rittscher; Thomas B. Sebastian; Ting Yu; Kevin George Harding
This paper presents an overview of Intelligent Video work currently under development at the GE Global Research Center and other research institutes. The image formation process is discussed in terms of illumination, methods for automatic camera calibration, and lessons learned from machine vision. A variety of approaches for person detection are presented. Crowd segmentation methods enabling the tracking of individuals through dense environments such as retail and mass transit sites are discussed. It is shown how signature generation based on gross appearance can be used to reacquire targets as they leave and enter disjoint fields of view. Camera calibration information is used to further constrain the detection of people and to synthesize a top view that fuses all camera views into a composite representation. It is shown how site-wide tracking can be performed in this unified framework. Human faces are an important feature, both as a biometric identifier and as a method for determining the focus of attention via head pose estimation. It is shown how automatic pan-tilt-zoom control, active shape/appearance models, and super-resolution methods can be used to enhance face capture and analysis. A discussion of additional features that can be used for inferring intent is given. These include body-part motion cues and physiological phenomena such as thermal images of the face.
British Machine Vision Conference | 2006
Xiaoming Liu; Peter Henry Tu; Frederick Wilson Wheeler
Active Appearance Models (AAMs) represent the shape and appearance of an object via two low-dimensional subspaces, one for shape and one for appearance. AAMs for facial images are currently receiving considerable attention from the vision community. However, most existing work focuses on fitting AAMs to high-quality facial images. For many applications, effectively fitting an AAM to low-resolution facial images is of critical importance. This paper addresses this challenge from two aspects. On the modeling side, we propose an iterative AAM enhancement scheme, which not only results in increased fitting speed, but also improves the fitting robustness. For fitting AAMs to low-resolution images, we build a multi-resolution AAM and show that the best fitting performance is obtained when the model resolution is slightly higher than the facial image resolution. Experimental results using both indoor video and outdoor surveillance video are presented.
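The finding that fitting works best when the model resolution is slightly higher than the facial image resolution suggests a simple model-selection rule over a multi-resolution AAM. The sketch below is a hypothetical heuristic built on that finding; the model sizes and the 1.2x margin are assumptions for illustration, not values from the paper.

```python
# A small, assumed heuristic: from AAMs built at several resolutions, pick the
# one whose face size is slightly above the face size observed in the frame.
def select_aam(observed_eye_distance_px, model_eye_distances_px=(20, 40, 80, 160)):
    """Return the smallest model resolution that exceeds the observed face size."""
    target = 1.2 * observed_eye_distance_px   # "slightly higher" margin (assumed)
    for d in sorted(model_eye_distances_px):
        if d >= target:
            return d
    return max(model_eye_distances_px)

level = select_aam(33)   # -> 40: model slightly higher-resolution than the face
```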
Computer Vision and Pattern Recognition | 2009
Yan Tong; Xiaoming Liu; Frederick Wilson Wheeler; Peter Henry Tu
Landmark labeling of training images is essential for many learning tasks in computer vision, such as object detection, tracking, and alignment. Image labeling is typically conducted manually, which is both labor-intensive and error-prone. To improve this process, this paper proposes a new approach to estimate a set of landmarks for a large image ensemble with only a small number of manually labeled images from the ensemble. Our approach, named semi-supervised least-squares congealing, aims to minimize an objective function defined on both labeled and unlabeled images. A shape model is learnt on-line to constrain the landmark configuration. We also employ a partitioning strategy to allow coarse-to-fine landmark estimation. Extensive experiments on facial images show that our approach can reliably and accurately label landmarks for a large image ensemble starting from a small number of manually labeled images, under various challenging scenarios.
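To make the idea concrete, a least-squares congealing objective of the kind described can be sketched as below. The notation is assumed for illustration (L and U are the labeled and unlabeled index sets, I_i is image i, W(x; p_i) its warp with parameters p_i, and only the warps of unlabeled images are optimized); it conveys the general form rather than reproducing the paper's exact objective, which also includes the on-line shape constraint.

```latex
% Assumed notation: L = labeled image indices (warps fixed), U = unlabeled
% image indices (warps optimized), I_i = image i, W(x; p_i) = its warp.
\varepsilon\bigl(\{p_i\}_{i \in U}\bigr) =
  \sum_{i \in U} \sum_{\substack{j \in L \cup U \\ j \neq i}}
  \bigl\| I_j\bigl(W(\mathbf{x};\, p_j)\bigr) - I_i\bigl(W(\mathbf{x};\, p_i)\bigr) \bigr\|^2
```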
Medical Imaging 2006: Image Processing | 2006
Frederick Wilson Wheeler; A. G. Amitha Perera; Bernhard Erich Hermann Claus; Serge Muller; Gero Peters; John P. Kaufhold
A novel technique for the detection and enhancement of microcalcifications in digital tomosynthesis mammography (DTM) is presented. In this method, the DTM projection images are used directly, instead of using a 3D reconstruction. Calcification residual images are computed for each of the projection images. Calcification detection is then performed over 3D space, based on the values of the calcification residual images at projection points for each 3D point under test. The quantum, electronic, and tissue noise variance at each pixel in each of the calcification residuals is incorporated into the detection algorithm. The 3D calcification detection algorithm finds a minimum variance estimate of calcification attenuation present in 3D space based on the signal and variance of the calcification residual images at the corresponding points in the projection images. The method effectively detects calcifications in 3D in a way that both ameliorates the difficulties of joint tissue/microcalcification tomosynthetic reconstruction (streak artifacts, etc.) and exploits the well understood image properties of microcalcifications as they appear in 2D mammograms. In this method, 3D reconstruction and calcification detection and enhancement are effectively combined to create a calcification detection specific reconstruction. Motivation and details of the technique and statistical results for DTM data are provided.
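The minimum-variance combination at the heart of this method can be sketched for a single 3-D point: each projection view contributes a calcification-residual sample with its own noise variance, and the inverse-variance weighted average is the classical minimum-variance unbiased combination of independent measurements. Variable names below are illustrative, not the paper's.

```python
# A hedged sketch of the minimum-variance combination step for one 3-D point:
# r_k is the calcification residual sampled at the point's projection into
# view k, v_k its noise variance (quantum + electronic + tissue).
import numpy as np

def min_variance_attenuation(residuals, variances):
    """Inverse-variance weighted estimate of calcification attenuation.
    Returns (estimate, variance of the estimate)."""
    r = np.asarray(residuals, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                              # weight each view by 1 / variance
    a_hat = np.sum(w * r) / np.sum(w)
    return a_hat, 1.0 / np.sum(w)

a_hat, var_hat = min_variance_attenuation([0.8, 1.1, 0.2], [0.05, 0.04, 0.50])
```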
International Conference on Computer Vision | 2009
Xiaoming Liu; Yan Tong; Frederick Wilson Wheeler
Joint alignment for an image ensemble can rectify images in the spatial domain such that the aligned images are as similar to each other as possible. This important technology has been applied to various object classes and medical applications. However, previous approaches to joint alignment work on an ensemble of a single object class. Given an ensemble with multiple object classes, we propose an approach to automatically and simultaneously solve two problems, image alignment and clustering. Both the alignment parameters and clustering parameters are formulated into a unified objective function, whose optimization leads to an unsupervised joint estimation approach. It is further extended to semi-supervised simultaneous estimation where a few labeled images are provided. Extensive experiments on diverse real-world databases demonstrate the capabilities of our work on this challenging problem.
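A plausible form of such a unified objective, written here with assumed notation rather than the paper's, couples a warp p_i and a cluster label z_i per image with a mean appearance mu_k per cluster; alternating minimization over warps, labels, and cluster means then performs alignment and clustering simultaneously.

```latex
% Assumed notation: z_i in {1,...,K} is image i's cluster label, p_i its warp
% parameters, mu_k the mean appearance of cluster k.
E\bigl(\{p_i\}, \{z_i\}, \{\mu_k\}\bigr) =
  \sum_{k=1}^{K} \;\sum_{i:\, z_i = k}
  \bigl\| I_i\bigl(W(\mathbf{x};\, p_i)\bigr) - \mu_k \bigr\|^2
```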
Computer Vision and Pattern Recognition | 2009
Necmiye Ozay; Yan Tong; Frederick Wilson Wheeler; Xiaoming Liu
This paper addresses the problem of developing facial image quality metrics that are predictive of the performance of existing biometric matching algorithms and incorporating the quality estimates into the recognition decision process to improve overall performance. The first task we consider is the separation of probe/gallery qualities since the match score depends on both. Given a set of training images of the same individual, we find the match scores between all possible probe/gallery image pairs. Then, we define symmetric normalized match score for any pair, model it as the average of the qualities of probe/gallery corrupted by additive noise, and estimate the quality values such that the noise is minimized. To utilize quality in the decision process, we employ a Bayesian network to model the relationships among qualities, predefined quality related image features and recognition. The recognition decision is made by probabilistic inference via this model. We illustrate with various face verification experiments that incorporating quality into the decision process can improve the performance significantly.
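The probe/gallery quality separation described above, modeling each symmetric normalized match score as the average of the two image qualities plus noise, can be posed as an ordinary least-squares problem. The sketch below shows that formulation; the matrix construction and names are assumptions for illustration, not the paper's exact procedure.

```python
# A hedged reconstruction of the quality-estimation step: each symmetric
# normalized match score s_ij between images i and j of the same person is
# modeled as (q_i + q_j)/2 plus noise, and the qualities q are recovered by
# least squares so that the residual noise is minimized.
import numpy as np

def estimate_qualities(scores):
    """scores : dict mapping (i, j) image-index pairs to symmetric match scores.
    Returns an array q with one estimated quality value per image."""
    n = max(max(i, j) for i, j in scores) + 1
    A = np.zeros((len(scores), n))
    b = np.zeros(len(scores))
    for row, ((i, j), s) in enumerate(scores.items()):
        A[row, i] = 0.5          # s_ij ~= (q_i + q_j) / 2 + noise
        A[row, j] = 0.5
        b[row] = s
    q, *_ = np.linalg.lstsq(A, b, rcond=None)
    return q

q = estimate_qualities({(0, 1): 0.9, (0, 2): 0.6, (1, 2): 0.7})
```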