Raymond S. Smith
University of Surrey
Publications
Featured research published by Raymond S. Smith.
Proceedings Computer Animation 1999 | 1999
Adrian Hilton; Daniel J. Beresford; Thomas Gentils; Raymond S. Smith; Wei Sun
A new technique is introduced for automatically building recognisable moving 3D models of individual people. Realistic modelling of people is essential for advanced multimedia, augmented reality and immersive virtual reality. Current systems for whole-body model capture are based on active 3D sensing to measure the shape of the body surface. Such systems are prohibitively expensive and do not enable capture of high-quality, photo-realistic colour, resulting in geometrically accurate but unrealistic human models. The goal of this research is to achieve automatic, low-cost modelling of people suitable for personalised avatars to populate virtual worlds. A model-based approach is presented for automatic reconstruction of recognisable avatars from a set of low-cost colour images of a person taken from four orthogonal views. A generic 3D human model represents both the human shape and the kinematic joint structure. The shape of a specific person is captured by mapping 2D silhouette information from the orthogonal-view colour images onto the generic 3D model. Colour texture mapping is achieved by projecting the set of images onto the deformed 3D model. The result is a recognisable 3D facsimile of an individual person suitable for articulated movement in a virtual world. The system is low cost, requires only single-shot capture, is reliable for large variations in shape and size, and can cope with clothing of moderate complexity.
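The silhouette-to-model mapping step can be illustrated with a minimal sketch, assuming orthographic views and a pre-aligned generic model (function and variable names below are hypothetical, not the authors' implementation): each generic-model vertex is projected into a view and pulled towards the nearest silhouette boundary point, leaving the out-of-plane coordinate untouched.

```python
import numpy as np

def deform_to_silhouette(vertices, silhouette_boundary, view_axes=(0, 1)):
    """Pull generic-model vertices toward a 2D silhouette boundary.

    vertices            : (N, 3) array of generic humanoid model vertices.
    silhouette_boundary : (M, 2) array of boundary points extracted from one
                          orthogonal-view image (already scaled to model units).
    view_axes           : which two of the 3D axes this view projects onto.

    Returns a deformed copy of the vertices; the out-of-plane coordinate is
    left unchanged, mimicking shape-from-silhouette in a single view.
    """
    deformed = vertices.copy()
    proj = vertices[:, view_axes]                      # orthographic projection
    # For each projected vertex, find the closest silhouette boundary point.
    d2 = ((proj[:, None, :] - silhouette_boundary[None, :, :]) ** 2).sum(-1)
    nearest = silhouette_boundary[d2.argmin(axis=1)]
    # Blend toward the boundary; a real system would only move vertices that
    # lie on the model's own silhouette for this view.
    alpha = 0.5
    deformed[:, view_axes] = (1 - alpha) * proj + alpha * nearest
    return deformed

# Toy usage: a ring of "model" vertices deformed toward a square silhouette.
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
verts = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
square = np.array([[x, y] for x in np.linspace(-1, 1, 9) for y in (-1.0, 1.0)])
print(deform_to_silhouette(verts, square).shape)
```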
The Visual Computer | 2000
Adrian Hilton; Daniel J. Beresford; Thomas Gentils; Raymond S. Smith; Wei Sun; John Illingworth
In this paper a new technique is introduced for automatically building recognisable, moving 3D models of individual people. A set of multiview colour images of a person is captured from the front, sides and back by one or more cameras. Model-based reconstruction of shape from silhouettes is used to transform a standard 3D generic humanoid model to approximate a person's shape and anatomical structure. Realistic appearance is achieved by colour texture mapping from the multiview images. The results show the reconstruction of a realistic 3D facsimile of the person suitable for animation in a virtual world. The system is inexpensive and is reliable for large variations in shape, size and clothing. This is the first approach to achieve realistic model capture for clothed people together with automatic reconstruction of animated models. A commercial system based on this approach has recently been used to capture thousands of models of the general public.
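The colour texture-mapping step can be sketched as follows, under simplifying assumptions (orthographic views, per-view normalisation standing in for calibrated cameras; all names are illustrative): each vertex takes its colour from the view that faces it most directly.

```python
import numpy as np

def texture_from_views(vertices, normals, images, view_dirs, view_axes):
    """Assign a colour to each mesh vertex from a set of orthogonal views.

    images     : list of (H, W, 3) arrays, one per view.
    view_dirs  : (V, 3) unit vectors pointing from the model towards each camera.
    view_axes  : list of (axis_u, axis_v) index pairs giving the projection
                 plane of each orthographic view.
    """
    colours = np.zeros((len(vertices), 3))
    # Pick, for every vertex, the view whose direction best matches its normal.
    best_view = (normals @ view_dirs.T).argmax(axis=1)
    for v, (img, axes) in enumerate(zip(images, view_axes)):
        mask = best_view == v
        if not mask.any():
            continue
        uv = vertices[mask][:, axes]                   # orthographic projection
        # Crude normalisation into pixel indices; a real system would use a
        # calibrated model-to-image mapping instead.
        uv = (uv - uv.min(0)) / np.ptp(uv, axis=0).clip(1e-9)
        rows = (uv[:, 1] * (img.shape[0] - 1)).astype(int)
        cols = (uv[:, 0] * (img.shape[1] - 1)).astype(int)
        colours[mask] = img[rows, cols]
    return colours
```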
IEEE Transactions on Neural Networks | 2011
Terry Windeatt; Rakkrit Duangsoithong; Raymond S. Smith
A feature ranking scheme for multilayer perceptron (MLP) ensembles is proposed, along with a stopping criterion based upon the out-of-bootstrap estimate. To solve multi-class problems, feature ranking is combined with modified error-correcting output coding. Experimental results on benchmark data demonstrate the versatility of the MLP base classifier in removing irrelevant features.
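The out-of-bootstrap estimate used as a stopping criterion can be sketched with scikit-learn's MLPClassifier (a simplified stand-in for the paper's procedure, assuming integer class labels 0..C-1): each base MLP trains on a bootstrap sample, and every training pattern is scored only by the ensemble members that did not see it. Features would then be removed in rank order, recomputing this estimate until it begins to rise.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def oob_error(X, y, n_members=11, seed=0):
    """Out-of-bootstrap error estimate for a bagged MLP ensemble.

    Assumes y contains integer class labels 0..C-1.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    votes = np.zeros((n, len(np.unique(y))))
    for _ in range(n_members):
        idx = rng.integers(0, n, size=n)               # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)          # patterns left out
        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500,
                            random_state=int(rng.integers(1_000_000)))
        clf.fit(X[idx], y[idx])
        # Each out-of-bootstrap pattern gets a vote only from members
        # that did not train on it.
        votes[oob, clf.predict(X[oob])] += 1
    covered = votes.sum(axis=1) > 0
    pred = votes[covered].argmax(axis=1)
    return np.mean(pred != y[covered])
```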
Machine Vision and Applications | 2003
Jonathan Starck; Gordon Collins; Raymond S. Smith; Adrian Hilton; John Illingworth
In this paper we present a layered framework for the animation of high-resolution human geometry captured using active 3D sensing technology. Commercial scanning systems can now acquire highly accurate surface data across the whole body. However, the result is a dense, irregular surface mesh without any structure for animation. We introduce a model-based approach to animating a scanned data-set by matching a generic humanoid control model to the surface data. A set of manually defined feature points is used to define body and facial pose, and a novel shape-constrained matching algorithm is presented to deform the control model to match the scanned shape. This model-based approach allows the detailed specification of surface animation to be defined once for the generic model and re-applied to any captured scan. The detail of the high-resolution geometry is represented as a displacement map on the surface of the control model, providing smooth reconstruction of detailed shape on the animated control surface. The generic model provides animation control over the scanned data-set, and the displacement map provides control of the high-resolution surface for editing geometry or for level of detail in reconstruction or compression.
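The displacement-map layer can be approximated very simply (a sketch only, using nearest points rather than the paper's shape-constrained matching; names are illustrative): sample the fitted control surface, record each sample's signed offset to the scan along its normal, and re-apply those offsets after animation.

```python
import numpy as np
from scipy.spatial import cKDTree

def displacement_map(control_pts, control_normals, scan_pts):
    """Signed scalar displacement per control-surface sample point.

    control_pts     : (N, 3) samples on the fitted control model surface.
    control_normals : (N, 3) unit normals at those samples.
    scan_pts        : (M, 3) high-resolution scanned surface points.
    """
    tree = cKDTree(scan_pts)
    _, nearest = tree.query(control_pts)               # closest scan point
    offsets = scan_pts[nearest] - control_pts
    # Project the offset onto the normal to get a single signed displacement,
    # which can later be re-applied to the animated control surface.
    return np.einsum('ij,ij->i', offsets, control_normals)

def reconstruct(control_pts, control_normals, displacements):
    """Re-apply stored displacements to a (possibly animated) control surface."""
    return control_pts + displacements[:, None] * control_normals
```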
Iberoamerican Congress on Pattern Recognition | 2007
Norman Poh; Josef Kittler; Raymond S. Smith; J. Rafael Tena
Underlying biometrics are biological tissues that evolve over time; biometric authentication (and recognition in general) is therefore a dynamic pattern recognition problem. We propose a novel method to track this change for each user, as well as over the whole population of users, given only the system match scores. Estimating this change is challenging because of the paucity of the data, especially the genuine user scores. We overcome this problem by imposing the constraints that the user-specific class-conditional scores follow a particular distribution (Gaussian in our case) and that this distribution varies continuously in time. As a result, we can estimate the performance to an arbitrary time precision. Our method compares favorably with the conventional empirical approach, which uses a sliding window and consequently suffers from a trade-off between performance precision and time resolution: higher performance precision entails lower time resolution and vice versa. Our findings applied to 3D face verification suggest that the overall system performance, i.e., over the whole population of observed users, improves with use initially but then gradually degrades over time. However, the performance of individual users varies dramatically; indeed, a minority of users actually improve in performance over time. While the performance trend depends on both the template and the person, our findings on 3D face verification suggest that the person dependency is a much stronger component. This suggests that strategies to reduce performance degradation, e.g., updating a biometric template/model, should be person-dependent.
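The flavour of the parametric approach can be seen in a small sketch (illustrative only, not the authors' estimator): model genuine and impostor scores as Gaussians whose means drift linearly in time, fit the drift by least squares on the sparse scores, and read off error rates at any time from the Gaussian CDFs.

```python
import numpy as np
from scipy.stats import norm

def fit_drifting_gaussian(times, scores):
    """Fit a score model N(a + b*t, sigma^2) by least squares on sparse data."""
    A = np.column_stack([np.ones_like(times), times])
    (a, b), *_ = np.linalg.lstsq(A, scores, rcond=None)
    sigma = np.std(scores - (a + b * times))
    return a, b, sigma

def error_rates_at(t, gen_params, imp_params, threshold):
    """FRR/FAR at an arbitrary time t from the two fitted Gaussian models."""
    ag, bg, sg = gen_params
    ai, bi, si = imp_params
    frr = norm.cdf(threshold, loc=ag + bg * t, scale=sg)   # genuine below threshold
    far = norm.sf(threshold, loc=ai + bi * t, scale=si)    # impostor above threshold
    return frr, far

# Toy usage with a handful of genuine scores drifting downwards over time.
t_gen = np.array([0., 1., 2., 5., 8.]); s_gen = np.array([3.0, 2.9, 2.7, 2.4, 2.0])
t_imp = np.array([0., 2., 4., 6., 8.]); s_imp = np.array([0.1, 0.0, 0.2, -0.1, 0.1])
print(error_rates_at(10.0, fit_drifting_gaussian(t_gen, s_gen),
                     fit_drifting_gaussian(t_imp, s_imp), threshold=1.0))
```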
Advanced Video and Signal Based Surveillance | 2007
Jose Rafael Tena; Raymond S. Smith; Miroslav Hamouz; Josef Kittler; Adrian Hilton; John Illingworth
The ever-growing need for improved security, surveillance and identity protection calls for the creation of ever more reliable and robust face recognition technology that is scalable and can be deployed in all kinds of environments without compromising its effectiveness. In this paper we study the impact that pose correction has on the performance of 2D face recognition. To measure the effect, we use a state-of-the-art 2D recognition algorithm; the pose correction is performed by means of a 3D morphable model. Our results on the non-frontal XM2VTS database show that pose correction can improve recognition rates by up to 30%.
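As a rough 2D stand-in for the pipeline (a 3D morphable model fit is considerably more involved; landmark positions and names here are purely illustrative), pose correction can be caricatured as warping detected facial landmarks onto canonical frontal positions before handing the crop to the 2D recogniser.

```python
import numpy as np
import cv2

# Canonical frontal positions (pixels) for left eye, right eye and nose tip
# in a 128x128 crop; these target coordinates are illustrative.
FRONTAL_LANDMARKS = np.float32([[42, 50], [86, 50], [64, 82]])

def pose_correct(image, detected_landmarks):
    """Warp a face image towards a frontal pose using three landmarks.

    detected_landmarks : (3, 2) float32 array of (x, y) positions of the
                         left eye, right eye and nose tip in `image`.
    """
    M = cv2.getAffineTransform(np.float32(detected_landmarks), FRONTAL_LANDMARKS)
    return cv2.warpAffine(image, M, (128, 128))

# The corrected crop would then be passed unchanged to the 2D face
# recognition algorithm under test.
```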
International Conference on Multiple Classifier Systems | 2010
Raymond S. Smith; Terry Windeatt
A method for applying weighted decoding to error-correcting output code ensembles of binary classifiers is presented. This method is sensitive to the target class in that a separate weight is computed for each base classifier and target class combination. Experiments on 11 UCI datasets show that the method tends to improve classification accuracy when using neural network or support vector machine base classifiers. It is further shown that weighted decoding combines well with the technique of bootstrapping to improve classification accuracy still further.
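A minimal numeric sketch of class-sensitive weighted decoding (the weight-estimation procedure itself is not shown, and all names are illustrative): each base-classifier output is compared with every class's code word, and each (class, classifier) agreement is scaled by its own weight before summing.

```python
import numpy as np

def weighted_ecoc_decode(outputs, code_matrix, weights):
    """Weighted ECOC decoding with one weight per (class, base classifier).

    outputs     : (B,) soft outputs of the B base classifiers in [0, 1].
    code_matrix : (K, B) binary code matrix, one row (code word) per class.
    weights     : (K, B) non-negative weights, e.g. learned from validation data.
    Returns the index of the winning class.
    """
    # Agreement of each output with each class's target bit, in [0, 1].
    agreement = 1.0 - np.abs(code_matrix - outputs[None, :])
    scores = (weights * agreement).sum(axis=1)
    return int(scores.argmax())

# Toy example: 3 classes, 5 base classifiers; uniform weights reduce this
# to standard L1 decoding.
code = np.array([[1, 0, 1, 0, 1],
                 [0, 1, 1, 0, 0],
                 [1, 1, 0, 1, 0]], dtype=float)
soft_outputs = np.array([0.9, 0.2, 0.8, 0.1, 0.7])
print(weighted_ecoc_decode(soft_outputs, code, np.ones_like(code)))
```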
Archive | 1999
Wei Sun; Adrian Hilton; Raymond S. Smith; John Illingworth
This paper proposes a technique for building layered animation models of real objects from 3D surface measurement data. A layered animation model is constructed with three layers: the skeleton, a low-resolution control model and a high-resolution model. Initially a skeleton model is manually placed inside the low-resolution control model and the high-resolution scanned data. Automatic techniques are introduced to map both the control model and the captured data into a single layered model. The resulting model enables efficient, seamless animation by manipulation of the skeleton whilst maintaining the captured high-resolution surface detail.
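The skeleton layer typically drives the control model through skinning; a minimal linear-blend-skinning sketch (illustrative only, not the mapping technique of the paper) is given below.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, bone_transforms, skin_weights):
    """Deform control-model vertices by a weighted blend of bone transforms.

    rest_vertices   : (N, 3) vertices of the control model in its rest pose.
    bone_transforms : (B, 4, 4) homogeneous transforms of each skeleton bone
                      relative to its rest configuration.
    skin_weights    : (N, B) per-vertex bone weights, rows summing to 1.
    """
    homog = np.concatenate([rest_vertices, np.ones((len(rest_vertices), 1))], axis=1)
    # Transform every vertex by every bone, then blend with the skin weights.
    per_bone = np.einsum('bij,nj->nbi', bone_transforms, homog)[..., :3]
    return np.einsum('nb,nbi->ni', skin_weights, per_bone)
```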
International Conference on Multiple Classifier Systems | 2005
Raymond S. Smith; Terry Windeatt
The ECOC technique for solving multi-class pattern recognition problems can be broken down into two distinct stages – encoding and decoding. Given a pattern vector of unknown class, the encoding stage consists of constructing a corresponding output code vector by applying each of the base classifiers in the ensemble to it. The decoding stage consists of making a classification decision based on the value of the output code. This paper focuses on the latter stage. Firstly, three different approaches to decoding rule design are reviewed and a new algorithm is presented. This new algorithm is then compared experimentally with two common decoding rules, and evidence is presented that the new rule has some advantages in the form of slightly improved classification accuracy and reduced sensitivity to optimal training.
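The two stages can be illustrated with a toy sketch (stub classifiers stand in for trained base classifiers; Hamming and L1 decoding are shown as two common decoding rules, not the new algorithm of the paper):

```python
import numpy as np

def ecoc_encode(x, base_classifiers):
    """Encoding stage: apply each base classifier to x to form an output code."""
    return np.array([clf(x) for clf in base_classifiers])   # soft outputs in [0, 1]

def hamming_decode(output_code, code_matrix):
    """Decode by Hamming distance between hardened outputs and each code word."""
    hard = (output_code > 0.5).astype(int)
    return int(np.abs(code_matrix - hard).sum(axis=1).argmin())

def l1_decode(output_code, code_matrix):
    """Decode by L1 distance between soft outputs and each code word."""
    return int(np.abs(code_matrix - output_code).sum(axis=1).argmin())

# Toy 3-class problem with 4 stub base classifiers standing in for trained models.
code = np.array([[1, 1, 0, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 1]], dtype=float)
stubs = [lambda x, i=i: float(code[0, i])   # pretend every classifier agrees
         for i in range(4)]                 # exactly with class 0's code word
print(hamming_decode(ecoc_encode(None, stubs), code),
      l1_decode(ecoc_encode(None, stubs), code))
```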
Neurocomputing | 2015
Raymond S. Smith; Terry Windeatt
Within the context of facial expression classification using the facial action coding system (FACS), we address the problem of detecting facial action units (AUs). Feature extraction is performed by generating a large number of multi-resolution local binary pattern (MLBP) features and then selecting from these using fast correlation-based filtering (FCBF). The need for one classifier per AU is avoided by training a single error-correcting output code (ECOC) multi-class classifier to generate occurrence scores for each of several AU groups. A novel weighted decoding scheme is proposed, with the weights computed using first-order Walsh coefficients. Platt scaling is used to calibrate the ECOC scores to probabilities, and appropriate sums are taken to obtain separate probability estimates for each AU individually. The bias and variance properties of the classifier are measured, and we show that both these sources of error can be reduced by enhancing ECOC through bootstrapping and weighted decoding.
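Platt scaling itself is straightforward to sketch (an illustrative stand-in; the MLBP features, FCBF selection, Walsh-coefficient weighting and per-AU summation are not shown): fit a two-parameter sigmoid mapping raw ECOC scores to probabilities by maximum likelihood on a calibration set.

```python
import numpy as np
from scipy.optimize import minimize

def fit_platt(scores, labels):
    """Fit Platt scaling p(y=1|s) = 1 / (1 + exp(A*s + B)) by maximum likelihood."""
    def nll(params):
        A, B = params
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    A, B = minimize(nll, x0=[-1.0, 0.0]).x
    return lambda s: 1.0 / (1.0 + np.exp(A * s + B))

# Toy calibration set: positive AU-group scores tend to be larger.
raw = np.array([-2.1, -1.0, -0.3, 0.2, 0.8, 1.5, 2.3])
lab = np.array([0, 0, 0, 1, 1, 1, 1])
calibrate = fit_platt(raw, lab)
print(calibrate(np.array([0.0, 2.0])))          # calibrated probabilities
```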