
Publications

Featured research published by Simon J. D. Prince.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Probabilistic Models for Inference about Identity

Peng Li; Yun Fu; Umar Mohammed; James H. Elder; Simon J. D. Prince

Many face recognition algorithms use “distance-based” methods: Feature vectors are extracted from each face and distances in feature space are compared to determine matches. In this paper, we argue for a fundamentally different approach. We consider each image as having been generated from several underlying causes, some of which are due to identity (latent identity variables, or LIVs) and some of which are not. In recognition, we evaluate the probability that two faces have the same underlying identity cause. We make these ideas concrete by developing a series of novel generative models which incorporate both within-individual and between-individual variation. We consider both the linear case, where signal and noise are represented by a subspace, and the nonlinear case, where an arbitrary face manifold can be described and noise is position-dependent. We also develop a “tied” version of the algorithm that allows explicit comparison of faces across quite different viewing conditions. We demonstrate that our model produces results that are comparable to or better than the state of the art for both frontal face recognition and face recognition under varying pose.
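The core of the latent-identity-variable idea in the linear case can be sketched as a likelihood-ratio test: two feature vectors either share one identity variable or have independent ones, and each hypothesis gives a Gaussian marginal over the stacked pair. The sketch below is a toy version of the paper's linear model; the function names and the isotropic noise assumption are illustrative, not the authors' implementation.

```python
import numpy as np

def gaussian_logpdf(x, cov):
    """Log density of a zero-mean multivariate normal."""
    d = len(x)
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + x @ np.linalg.solve(cov, x))

def same_identity_llr(x1, x2, F, sigma2):
    """Log-likelihood ratio: same latent identity cause vs. independent ones.
    F spans the identity subspace; sigma2 is isotropic within-individual noise."""
    d = F.shape[0]
    B = F @ F.T                          # between-individual covariance
    W = sigma2 * np.eye(d)               # within-individual covariance
    x = np.concatenate([x1, x2])
    same = np.block([[B + W, B], [B, B + W]])          # shared identity
    diff = np.block([[B + W, np.zeros((d, d))],
                     [np.zeros((d, d)), B + W]])       # independent identities
    return gaussian_logpdf(x, same) - gaussian_logpdf(x, diff)
```

A positive ratio favours "same identity"; matched pairs score high, mismatched pairs low, with no explicit distance threshold in feature space.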


International Conference on Computer Graphics and Interactive Techniques | 2009

Visio-lization: generating novel facial images

Umar Mohammed; Simon J. D. Prince; Jan Kautz

Our goal is to generate novel realistic images of faces using a model trained from real examples. This model consists of two components: First we consider face images as samples from a texture with spatially varying statistics and describe this texture with a local non-parametric model. Second, we learn a parametric global model of all of the pixel values. To generate realistic faces, we combine the strengths of both approaches and condition the local non-parametric model on the global parametric model. We demonstrate that with appropriate choice of local and global models it is possible to reliably generate new realistic face images that do not correspond to any individual in the training data. We extend the model to cope with considerable intra-class variation (pose and illumination). Finally, we apply our model to editing real facial images: we demonstrate image in-painting, interactive techniques for improving synthesized images and modifying facial expressions.
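Conditioning a local non-parametric model on a global parametric one can be illustrated very simply: the global model proposes a coarse target image, and each non-overlapping region is then replaced by the closest patch from a library of real examples. This is a heavily simplified sketch of the idea, not the paper's method; `synthesize` and its arguments are hypothetical names.

```python
import numpy as np

def synthesize(global_pred, library, patch=4):
    """For each non-overlapping patch of the global (parametric) prediction,
    substitute the nearest patch from a library of real example patches.
    Assumes image dimensions are multiples of the patch size."""
    h, w = global_pred.shape
    out = np.empty_like(global_pred)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            target = global_pred[i:i+patch, j:j+patch]
            dists = [np.sum((p - target) ** 2) for p in library]
            out[i:i+patch, j:j+patch] = library[int(np.argmin(dists))]
    return out
```

Because every output patch is copied from a real example, local texture statistics stay realistic even when the global prediction is blurry.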


Computer Vision and Pattern Recognition | 2010

“Lattice Cut” - Constructing superpixels using layer constraints

Alastair Philip Moore; Simon J. D. Prince; Jonathan Warrell

Unsupervised over-segmentation of an image into super-pixels is a common preprocessing step for image parsing algorithms. Superpixels are used as both regions of support for feature vectors and as a starting point for the final segmentation. Recent algorithms that construct superpixels that conform to a regular grid (or superpixel lattice) have used greedy solutions. In this paper we show that we can construct a globally optimal solution in either the horizontal or vertical direction using a single graph cut. The solution takes into account both edges in the image, and the coherence of the resulting superpixel regions. We show that our method outperforms existing algorithms for computing superpixel lattices. Additionally, we show that performance can be comparable or better than other contemporary segmentation algorithms which are not constrained to produce a lattice.
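The paper's contribution is a single graph cut that places one superpixel boundary optimally in a given direction. A simplified one-boundary analogue, shown below, finds a minimum-cost top-to-bottom path through a per-pixel cost map with dynamic programming (the cost would be low along image edges); this is a sketch of the underlying idea, not the graph-cut formulation itself.

```python
import numpy as np

def best_vertical_cut(cost):
    """Minimum-cost top-to-bottom 8-connected path through a cost map,
    found by dynamic programming; returns the cut's column in each row."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, acc[i-1, :-1]]
        right = np.r_[acc[i-1, 1:], np.inf]
        acc[i] += np.minimum(np.minimum(left, acc[i-1]), right)
    # backtrack from the cheapest bottom-row cell
    path = [int(np.argmin(acc[-1]))]
    for i in range(h - 2, -1, -1):
        j = path[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        path.append(lo + int(np.argmin(acc[i, lo:hi])))
    return path[::-1]
```

Repeating such cuts in both directions yields a regular lattice of superpixels; the paper's single graph cut solves each direction globally rather than greedily.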


British Machine Vision Conference | 2009

Face Pose Estimation in Uncontrolled Environments

Jania Aghajanian; Simon J. D. Prince

Automatic estimation of head pose from a face image is a sub-problem of human face analysis with widespread applications such as gaze direction detection and human-computer interaction. Most current methods estimate pose in a limited range or treat pose as a classification problem by assigning the face to one of many discrete poses. Moreover, they have mainly been tested on images taken in controlled environments. We address the problem of estimating pose as a continuous regression problem on “real world” images with large variations in background, illumination and expression. We propose a probabilistic framework with a general representation that does not rely on locating facial features. Instead, we represent a face with a non-overlapping grid of patches. This representation is used in a generative model for automatic estimation of head pose ranging from −90° to +90° in images taken in uncontrolled environments. Our methods achieve a correlation of 0.88 with the human estimates of pose.
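The non-overlapping patch-grid representation used here is a simple reshape of the image; the sketch below shows one way to build it (the function name is illustrative, and the generative pose model that would operate on the grid is omitted).

```python
import numpy as np

def to_patch_grid(img, ph, pw):
    """Represent an image as a grid of non-overlapping ph-by-pw patches,
    returned with shape (rows, cols, ph, pw)."""
    h, w = img.shape
    assert h % ph == 0 and w % pw == 0, "image must tile exactly"
    return img.reshape(h // ph, ph, w // pw, pw).swapaxes(1, 2)
```

Because the representation needs no facial-feature localization, it degrades gracefully on uncontrolled images where feature detectors fail.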


International Journal of Computer Vision | 2007

Pre-Attentive and Attentive Detection of Humans in Wide-Field Scenes

James H. Elder; Simon J. D. Prince; Yuqian Hou; Mikhail Sizintsev; E. Olevskiy

We address the problem of localizing and obtaining high-resolution footage of the people present in a scene. We propose a biologically-inspired solution combining pre-attentive, low-resolution sensing for detection with shiftable, high-resolution, attentive sensing for confirmation and further analysis. The detection problem is made difficult by the unconstrained nature of realistic environments and human behaviour, and the low resolution of pre-attentive sensing. Analysis of human peripheral vision suggests a solution based on integration of relatively simple but complementary cues. We develop a Bayesian approach involving layered probabilistic modeling and spatial integration using a flexible norm that maximizes the statistical power of both dense and sparse cues. We compare the statistical power of several cues and demonstrate the advantage of cue integration. We evaluate the Bayesian cue integration method for human detection on a labelled surveillance database and find that it outperforms several competing methods based on conjunctive combinations of classifiers (e.g., Adaboost). We have developed a real-time version of our pre-attentive human activity sensor that generates saccadic targets for an attentive foveated vision system. Output from high-resolution attentive detection algorithms and gaze state parameters are fed back as statistical priors and combined with pre-attentive cues to determine saccadic behaviour. The result is a closed-loop system that fixates faces over a 130 deg field of view, allowing high-resolution capture of facial video over a large dynamic scene.
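The simplest form of Bayesian cue integration, which the paper's layered model builds on, combines per-cue log-likelihood ratios with the prior log-odds and maps the result back to a probability. The sketch below assumes conditionally independent cues (a naive-Bayes simplification; the paper's flexible-norm spatial integration is not shown).

```python
import math

def posterior_prob(prior, cue_llrs):
    """Posterior probability of 'person present' from a prior and a list of
    per-cue log-likelihood ratios, assuming conditional independence:
    log-odds = log prior-odds + sum of cue LLRs."""
    log_odds = math.log(prior / (1 - prior)) + sum(cue_llrs)
    return 1 / (1 + math.exp(-log_odds))
```

Weak individual cues with modest LLRs can combine into a confident detection, which is why integration beats any single cue at low resolution.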


International Conference on Computer Vision | 2009

Patch-based within-object classification

Jania Aghajanian; Jonathan Warrell; Simon J. D. Prince; Peng Li; Jennifer Rohn; Buzz Baum

Advances in object detection have made it possible to collect large databases of certain objects. In this paper we exploit these datasets for within-object classification. For example, we classify gender in face images, pose in pedestrian images and phenotype in cell images. Previous work has mainly targeted the above tasks individually using object specific representations. Here, we propose a general Bayesian framework for within-object classification. Images are represented as a regular grid of non-overlapping patches. In training, these patches are approximated by a predefined library. In inference, the choice of approximating patch determines the classification decision. We propose a Bayesian framework in which we marginalize over the patch frequency parameters to provide a posterior probability for the class. We test our algorithm on several challenging “real world” databases.
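The "approximate each patch by a library entry, then classify from the chosen entries" pipeline can be sketched as follows. This is a toy version: it scores classes by summing log-probabilities of the chosen library entries, whereas the paper marginalizes over patch frequency parameters; the names are illustrative.

```python
import numpy as np

def classify(patches, library, class_log_prob):
    """Quantize each patch to its nearest library entry, then sum each
    class's log-probability of the chosen entries.
    class_log_prob has shape (n_classes, n_library_entries)."""
    idx = [int(np.argmin([np.sum((p - q) ** 2) for q in library]))
           for p in patches]
    scores = class_log_prob[:, idx].sum(axis=1)
    return int(np.argmax(scores)), idx
```

Because the representation is just a grid of quantized patches, the same machinery applies to faces, pedestrians and cell images without object-specific features.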


British Machine Vision Conference | 2006

Tied factor analysis for face recognition across large pose changes

Simon J. D. Prince; James H. Elder

Face recognition algorithms perform very unreliably when the pose of the probe face is different from the stored face: typical feature vectors vary more with pose than with identity. We propose a generative model that creates a one-to-many mapping from an idealized “identity” space to the observed data space. In this identity space, the representation for each individual does not vary with pose. The measured feature vector is generated by a pose-contingent linear transformation of the identity vector in the presence of noise. We term this model “tied” factor analysis. The choice of linear transformation (factors) depends on the pose, but the loadings are constant (tied) for a given individual. Our algorithm estimates the linear transformations and the noise parameters using training data. We propose a probabilistic distance metric which allows a full posterior over possible matches to be established. We introduce a novel feature extraction process and investigate recognition performance using the FERET database. Recognition performance is shown to be significantly better than contemporary approaches.
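The tied factor analysis generative model is x_p = F_p h + mu_p + noise, with the identity vector h shared across poses while the factors F_p and offsets mu_p change with pose. The sketch below generates such data and recovers h by least squares (a simplification of the paper's EM learning and probabilistic matching; names are illustrative).

```python
import numpy as np

def generate_views(h, F, mu, sigma, rng):
    """Observations of one individual under each pose p:
    x_p = F[p] @ h + mu[p] + sigma * noise, with h tied across poses."""
    return [F[p] @ h + mu[p] + sigma * rng.standard_normal(len(mu[p]))
            for p in range(len(F))]

def infer_identity(xs, F, mu):
    """Least-squares estimate of the tied identity vector given observations
    under known poses (flat prior, isotropic noise assumed)."""
    A = np.vstack(F)
    b = np.concatenate([x - m for x, m in zip(xs, mu)])
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return h
```

Matching in the pose-free identity space is what makes recognition across large pose changes tractable: two images match if they are plausibly generated by the same h.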


Computer Vision and Pattern Recognition | 2009

Joint and implicit registration for face recognition

Peng Li; Simon J. D. Prince

Contemporary face recognition algorithms rely on precise localization of keypoints (corner of eye, nose etc.). Unfortunately, finding keypoints reliably and accurately remains a hard problem. In this paper we pose two questions. First, is it possible to exploit the gallery image in order to find keypoints in the probe image? For instance, consider finding the left eye in the probe image. Rather than using a generic eye model, we use a model that is informed by the appearance of the eye in the gallery image. To this end we develop a probabilistic model which combines recognition and keypoint localization. Second, is it necessary to localize keypoints? Alternatively we can consider keypoint position as a hidden variable which we marginalize over in a Bayesian manner. We demonstrate that both of these innovations improve performance relative to conventional methods in both frontal and cross-pose face recognition.
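Treating keypoint position as a hidden variable amounts to summing the match likelihood over candidate positions weighted by a prior, rather than committing to one detected location. A minimal sketch (the stable log-sum-exp form; function names are illustrative):

```python
import numpy as np

def marginal_match_loglik(loglik_at_pos, log_prior):
    """log p(match) = logsumexp_k [ log p(match | pos_k) + log p(pos_k) ]:
    marginalize the match likelihood over candidate keypoint positions."""
    a = np.asarray(loglik_at_pos) + np.asarray(log_prior)
    m = a.max()
    return m + np.log(np.exp(a - m).sum())
```

When the true keypoint location is ambiguous, no single candidate dominates and the marginal spreads credit across them, which is exactly where hard localization fails.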


International Conference on Computer Vision | 2009

Scene shape priors for superpixel segmentation

Alastair Philip Moore; Simon J. D. Prince; Jonathan Warrell; Umar Mohammed; Graham Jones

Unsupervised over-segmentation of an image into super-pixels is a common preprocessing step for image parsing algorithms. Superpixels are used as both regions of support for feature vectors and as a starting point for the final segmentation. In this paper we investigate incorporating a priori information into superpixel segmentations. We learn a probabilistic model that describes the spatial density of the object boundaries in the image. We then describe an over-segmentation algorithm that partitions this density roughly equally between superpixels whilst still attempting to capture local object boundaries. We demonstrate this approach using road scenes where objects in the center of the image tend to be more distant and smaller than those at the edge. We show that our algorithm successfully learns this foveated spatial distribution and can exploit this knowledge to improve the segmentation. Lastly, we introduce a new metric for evaluating vision labeling problems. We measure performance on a challenging real-world dataset and illustrate the limitations of conventional evaluation metrics.
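Partitioning a learned boundary density roughly equally between superpixels can be illustrated in one dimension: place split points at equal quantiles of the density's cumulative sum. This is a 1-D sketch of the idea, not the paper's 2-D over-segmentation algorithm.

```python
import numpy as np

def equal_mass_splits(density, n_strips):
    """Column indices that split a 1-D boundary density into n_strips of
    roughly equal mass, via the normalized cumulative sum."""
    cdf = np.cumsum(density, dtype=float)
    cdf /= cdf[-1]
    targets = np.arange(1, n_strips) / n_strips
    return [int(np.searchsorted(cdf, t)) for t in targets]
```

For the road scenes in the paper, the density peaks near the image center, so splits crowd there and distant small objects get proportionally more superpixels.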


International Conference on Pattern Recognition | 2010

CUDA Implementation of Deformable Pattern Recognition and its Application to MNIST Handwritten Digit Database

Yoshiki Mizukami; Katsumi Tadamura; Jonathan Warrell; Peng Li; Simon J. D. Prince

In this study we propose a deformable pattern recognition method with CUDA implementation. In order to achieve the proper correspondence between foreground pixels of input and prototype images, a pair of distance maps are generated from input and prototype images, whose pixel values are given based on the distance to the nearest foreground pixel. Then a regularization technique computes the horizontal and vertical displacements based on these distance maps. The dissimilarity is measured based on the eight-directional derivative of input and prototype images in order to leverage characteristic information on the curvature of line segments that might be lost after the deformation. The prototype-parallel displacement computation on CUDA and the gradual prototype elimination technique are employed for reducing the computational time without sacrificing the accuracy. A simulation shows that the proposed method with the k-nearest neighbor classifier gives the error rate of 0.57% for the MNIST handwritten digit database.
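The distance maps at the heart of the method assign each pixel the distance to the nearest foreground pixel. A brute-force reference version is sketched below; the paper computes these maps (and the prototype-parallel matching) on CUDA for speed, which this sketch does not attempt.

```python
import numpy as np

def distance_map(binary):
    """Euclidean distance from every pixel to the nearest foreground pixel
    of a binary image (brute force, O(pixels * foreground))."""
    fg = np.argwhere(binary)
    out = np.empty(binary.shape, dtype=float)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = np.sqrt(((fg - (i, j)) ** 2).sum(axis=1)).min()
    return out
```

Matching distance maps instead of raw binary images gives the regularized displacement estimation a smooth objective, since small misalignments change the maps gradually rather than abruptly.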

Collaboration

Explore Simon J. D. Prince's collaborations.

Top Co-Authors

Peng Li, Nanyang Technological University
Umar Mohammed, University College London
Jan Kautz, University College London
Yun Fu, University College London