Nick Pears
University of York
Publications
Featured research published by Nick Pears.
computer vision and pattern recognition | 2007
Hongying Meng; Nick Pears; Chris Bailey
In this paper, we propose a human action recognition system suitable for embedded computer vision applications in security systems, human-computer interaction and intelligent environments. Our system is suited to embedded computer vision for three reasons. Firstly, it is based on a linear support vector machine (SVM) classifier, whose classification stage can be implemented easily and efficiently in embedded hardware. Secondly, we use compact motion features that are easily obtained from video. We address the limitations of the well-known motion history image (MHI) and propose a new hierarchical motion history histogram (HMHH) feature to represent motion information. HMHH not only provides rich motion information, but also remains computationally inexpensive. Finally, we combine MHI and HMHH and extract a low-dimensional feature vector for use in the SVM classifiers. Experimental results show that our system achieves a significant improvement in recognition performance.
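The MHI baseline that the HMHH feature extends can be sketched as follows. This is a minimal illustration of the classic Bobick and Davis recursive update from frame differencing; the decay constant `tau` and the difference threshold are illustrative values, and the HMHH extension itself is not shown.

```python
import numpy as np

def motion_history_image(frames, tau=10, diff_thresh=25):
    """Compute a Motion History Image (MHI) over a grayscale frame sequence.

    Pixels where motion occurred most recently take the highest value (tau);
    older motion decays linearly by one per frame, so the MHI encodes both
    where and how recently motion happened in a single compact image.
    """
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    for prev, curr in zip(frames, frames[1:]):
        # Simple frame differencing as the motion detector.
        moving = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > diff_thresh
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi
```

The resulting single-channel image can be flattened (or further compressed, as the paper does) into the low-dimensional vector fed to the linear SVM.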
international conference on image processing | 2004
Thomas Heseltine; Nick Pears; Jim Austin
advanced video and signal based surveillance | 2011
Greg Pearce; Nick Pears
intelligent robots and systems | 2001
Nick Pears; Bojian Liang
international conference on robotics and automation | 2002
Bojian Liang; Nick Pears
International Journal of Computer Vision | 2013
Clement Creusot; Nick Pears; Jim Austin
We address the problem of automatically detecting a sparse set of 3D mesh vertices that are likely to be good candidates for determining correspondences, even on soft organic objects. We focus on 3D face scans, on which single local shape descriptor responses are known to be weak, sparse or noisy. Our machine-learning approach consists of computing feature vectors containing different local surface descriptors. These vectors are normalized with respect to the learned distribution of those descriptors for some given target shape (landmark) of interest. Then, an optimal function of this vector is extracted that best separates this particular target shape from its surrounding region within the set of training data. We investigate two alternatives for this optimal function: a linear method, namely Linear Discriminant Analysis, and a non-linear method, namely AdaBoost. We evaluate our approach by landmarking 3D face scans in the FRGC v2 and Bosphorus 3D face datasets. Our system achieves state-of-the-art performance while being highly generic.
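The linear ("optimal function") alternative described in the landmark-detection abstract can be sketched with a plain Fisher discriminant on descriptor vectors. The descriptors, dimensions and class distributions below are invented for illustration; they stand in for per-vertex surface-descriptor responses near a target landmark versus its surrounding region.

```python
import numpy as np

def fisher_lda_direction(X_pos, X_neg):
    """Fisher linear discriminant: the direction w maximising between-class
    separation relative to within-class scatter, w = Sw^-1 (mu+ - mu-)."""
    mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
    Sw = np.cov(X_pos, rowvar=False) + np.cov(X_neg, rowvar=False)
    return np.linalg.solve(Sw, mu_p - mu_n)

rng = np.random.default_rng(0)
# Hypothetical 3-D descriptor vectors: responses at vertices near the
# target landmark vs. vertices sampled from the surrounding region.
landmark = rng.normal([1.0, 2.0, 0.5], 0.3, size=(200, 3))
background = rng.normal([0.0, 0.0, 0.0], 0.3, size=(200, 3))

w = fisher_lda_direction(landmark, background)
# Projected scores act as a per-vertex "landmark-ness" measure.
scores_pos = landmark @ w
scores_neg = background @ w
```

Thresholding or locally maximising such scores over the mesh gives the sparse candidate set; the non-linear AdaBoost alternative replaces the single projection with a boosted combination of weak classifiers.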
IEEE Pervasive Computing | 2009
Nick Pears; Daniel Jackson; Patrick Olivier
We evaluate a new approach to face recognition using a variety of surface representations of three-dimensional facial structure. Applying principal component analysis (PCA), we show that high levels of recognition accuracy can be achieved on a large database of 3D face models, captured under conditions that present typical difficulties to more conventional two-dimensional approaches. Applying a range of image processing techniques, we identify the most effective surface representation for use in such application areas as security, surveillance, data compression and archive searching.
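The PCA step on surface representations can be sketched as below. This is a minimal "eigensurface" illustration under the assumption that each face is a registered, flattened depth map (or any other surface representation rendered as an image); the data here is random and stands in for real scans.

```python
import numpy as np

def pca_basis(surfaces, k):
    """PCA basis from flattened 3-D surface representations.

    surfaces: (n_samples, n_pixels) array, each row a flattened depth map.
    Returns the mean surface and the top-k principal components.
    """
    mean = surfaces.mean(axis=0)
    centred = surfaces - mean
    # Rows of Vt are the principal directions, ordered by explained variance.
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    return mean, Vt[:k]

def project(surface, mean, components):
    """Low-dimensional code usable for nearest-neighbour face matching."""
    return components @ (surface - mean)
```

Recognition then reduces to comparing projected codes (e.g. by Euclidean or Mahalanobis distance) between a probe surface and the gallery.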
International Journal of Computer Vision | 2010
Nick Pears; Thomas Heseltine; Marcelo Romero
We investigate a range of solutions to car ‘make and model’ recognition. Several different feature detection approaches are investigated and applied to the problem, including a new approach based on Harris corner strengths, which recursively partitions the image into quadrants; the feature strengths in these quadrants are then summed and locally normalised in a recursive, hierarchical fashion. Two different classification approaches are investigated: a k-nearest-neighbour classifier and a Naive Bayes classifier. Our system classifies vehicles with 96.0% accuracy, tested using leave-one-out cross-validation on a realistic dataset of 262 frontal images of cars.
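The recursive quadrant scheme might look roughly like the sketch below. This is a hypothetical reading of the description: per-level quadrant sums of a corner-strength map, normalised locally so the four values at each level sum to one; the paper's exact normalisation may differ.

```python
import numpy as np

def quadrant_features(strength, depth):
    """Recursively partition a corner-strength map into quadrants.

    At each level, sum the strengths in the four quadrants and normalise
    the four sums locally, then recurse into each quadrant. Returns the
    concatenated feature vector of length 4 * (4**depth - 1) / 3.
    """
    feats = []
    def recurse(block, d):
        if d == 0:
            return
        h, w = block.shape
        quads = [block[:h//2, :w//2], block[:h//2, w//2:],
                 block[h//2:, :w//2], block[h//2:, w//2:]]
        sums = np.array([q.sum() for q in quads], dtype=float)
        total = sums.sum()
        feats.extend(sums / total if total > 0 else sums)
        for q in quads:
            recurse(q, d - 1)
    recurse(strength, depth)
    return np.array(feats)
```

The fixed-length vector produced this way can be fed directly to a k-nearest-neighbour or Naive Bayes classifier.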
Archive | 2012
Nick Pears; Yonghuai Liu; Peter Bunting
We describe a method of mobile robot monocular visual navigation which uses multiple visual cues to detect and segment the ground plane in the robot's field of view. Corner points are tracked through an image sequence and grouped into coplanar regions using a method we call an H-based tracker. The H-based tracker employs planar homographies and is initialised by 5-point planar projective invariants. This allows us to detect ground plane patches, and the colour within such patches is subsequently modelled. These patches are grown by colour classification to give a ground plane segmentation, which is then used as input to a new variant of the artificial potential field algorithm.
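The planar homographies underlying such coplanar grouping can be estimated from four or more point correspondences with the standard Direct Linear Transform (DLT). A minimal sketch follows; this is the textbook DLT, not the paper's 5-point projective-invariant initialisation, and tracked corners can be grouped by thresholding their transfer error under a candidate H.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 planar homography H with dst ~ H @ src (DLT).

    src, dst: (n, 2) arrays of corresponding image points, n >= 4,
    with no three of the points collinear.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u*x, u*y, u])
        A.append([0, 0, 0, -x, -y, -1, v*x, v*y, v])
    # Null vector of A (smallest singular vector) is H up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def transfer(H, pts):
    """Apply H to 2-D points via homogeneous transfer."""
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

Points whose transfer error under the ground-plane homography is small are consistent with lying on the ground plane; large-error points belong to other structures.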
international conference on image analysis and recognition | 2004
Thomas Heseltine; Nick Pears; Jim Austin
We introduce three new results, which allow homographies of the ground plane to support visual navigation functions for mobile robots using uncalibrated cameras. Firstly, we illustrate how, for pure translation, a homography can be computed from just two pairs of corresponding corner features. Secondly, we show how, for pure translation, we can determine the height of corner features above the ground plane using the recovered homography and a construct based on the cross ratio. This allows us to detect points which can be driven over, as their height is measured to be close to zero, and points which are sufficiently high to drive under. Finally, we show how, in the case of general planar motion, homographies can be used to determine the rotation of the camera and robot.
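The cross-ratio construct mentioned above rests on the fact that the cross ratio of four collinear points is preserved by any planar projective transformation; the paper's specific height-recovery construction is not reproduced here, but the invariance itself can be illustrated numerically.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (AC/BC)/(AD/BD) of four collinear 2-D points.

    This quantity is invariant under any homography, which is what makes
    it usable for metric reasoning from uncalibrated images.
    """
    dist = lambda p, q: np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    return (dist(a, c) / dist(b, c)) / (dist(a, d) / dist(b, d))

def apply_homography(H, p):
    """Map a single 2-D point through a 3x3 homography."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])
```

Because the value survives the projection from scene to image, ratios of heights above the ground plane can be related to measurable image quantities without camera calibration.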