Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jonathan Warrell is active.

Publications


Featured research published by Jonathan Warrell.


International Conference on Computer Vision | 2013

Efficient Salient Region Detection with Soft Image Abstraction

Ming-Ming Cheng; Jonathan Warrell; Wen-Yan Lin; Shuai Zheng; Vibhav Vineet; Nigel Crook

Detecting visually salient regions in images is one of the fundamental problems in computer vision. We propose a novel method to decompose an image into large scale perceptually homogeneous elements for efficient salient region detection, using a soft image abstraction representation. By considering both appearance similarity and spatial distribution of image pixels, the proposed representation abstracts out unnecessary image details, allowing the assignment of comparable saliency values across similar regions, and producing perceptually accurate salient region detection. We evaluate our salient region detection approach on the largest publicly available dataset with pixel accurate annotations. The experimental results show that the proposed method outperforms 18 alternate methods, reducing the mean absolute error by 25.2% compared to the previous best result, while being computationally more efficient.
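The appearance half of this idea can be sketched in a few lines: quantise colours into a coarse histogram and score each colour by its frequency-weighted distance to all others. The `global_contrast_saliency` helper below is illustrative only; the paper's soft abstraction and spatial-distribution cue are omitted.

```python
import numpy as np

def global_contrast_saliency(image, bins=8):
    """Histogram-based global contrast saliency (toy sketch).

    Each colour bin is scored by its distance to every other bin,
    weighted by how often those other colours occur, so rare,
    distinctive colours come out salient."""
    # Quantise each RGB channel into `bins` levels -> one bin index per pixel.
    q = (image.astype(np.float64) / 256.0 * bins).astype(int)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    n_bins = bins ** 3
    freq = np.bincount(idx.ravel(), minlength=n_bins).astype(np.float64)
    freq /= freq.sum()
    # Representative colour of each bin = bin centre in RGB space.
    centres = np.stack(
        np.meshgrid(*([np.arange(bins)] * 3), indexing="ij"), axis=-1
    ).reshape(-1, 3)
    centres = (centres + 0.5) * (256.0 / bins)
    # Saliency of a bin: frequency-weighted colour distance to all bins.
    dist = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    sal = (dist @ freq)[idx]
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

Rare, distinctive colours receive high saliency; the paper's soft assignments and spatial cue refine this considerably.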


Computer Vision and Pattern Recognition | 2011

Proposal generation for object detection using cascaded ranking SVMs

Ziming Zhang; Jonathan Warrell; Philip H. S. Torr

Object recognition has made great strides recently. However, the best methods, such as those based on kernel SVMs, are highly computationally intensive, so the problem of how to accelerate the evaluation process without decreasing accuracy is of current interest. In this paper, we deal with this problem by using the idea of ranking. We propose a cascaded architecture which, using ranking SVMs, generates an ordered set of proposals for windows containing object instances. The top-ranking windows may then be fed to a more complex detector. Our experiments demonstrate that our approach is robust, achieving higher overlap-recall values using fewer output proposals than the state-of-the-art. Our use of simple gradient features and linear convolution indicates that our method is also faster than the state-of-the-art.
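A single cascade stage can be sketched as linear scoring of sliding windows over simple gradient features, keeping only the top-ranked candidates. The two-dimensional feature set, the weights, and the `propose_windows` name below are made up for illustration; the paper's learned ranking SVM and multi-stage cascade are not reproduced.

```python
import numpy as np

def propose_windows(image, w, win=8, stride=4, k=3):
    """One cascade stage, sketched: slide a window over the image,
    describe it with simple gradient-energy features, score with a
    linear weight vector (this is what a ranking SVM would learn),
    and pass only the k best-ranked windows to the next stage."""
    gy, gx = np.gradient(image.astype(np.float64))
    mag = np.hypot(gx, gy)
    boxes, feats = [], []
    for y in range(0, image.shape[0] - win + 1, stride):
        for x in range(0, image.shape[1] - win + 1, stride):
            patch = mag[y:y + win, x:x + win]
            # Two toy features per window: mean and peak gradient magnitude.
            feats.append([patch.mean(), patch.max()])
            boxes.append((y, x, win, win))
    order = np.argsort(-(np.asarray(feats) @ w))
    return [boxes[i] for i in order[:k]]
```

Because scoring is a linear product over cheap features, ranking thousands of candidate windows stays fast, which is the point of the cascade.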


Computer Vision and Pattern Recognition | 2013

Mesh Based Semantic Modelling for Indoor and Outdoor Scenes

Julien P. C. Valentin; Sunando Sengupta; Jonathan Warrell; Ali Shahrokni; Philip H. S. Torr

Semantic reconstruction of a scene is important for a variety of applications such as 3D modelling, object recognition and autonomous robotic navigation. However, most object labelling methods work in the image domain and fail to capture the information present in 3D space. In this work we propose a principled way to generate object labelling in 3D. Our method builds a triangulated meshed representation of the scene from multiple depth estimates. We then define a CRF over this mesh, which is able to capture the consistency of geometric properties of the objects present in the scene. In this framework, we are able to generate object hypotheses by combining information from multiple sources: geometric properties (from the 3D mesh), and appearance properties (from images). We demonstrate the robustness of our framework in both indoor and outdoor scenes. For indoor scenes we created an augmented version of the NYU indoor scene dataset (RGBD images) with object-labelled meshes for training and evaluation. For outdoor scenes, we created ground truth object labellings for the KITTI odometry dataset (stereo image sequence). We observe a significant speed-up in the inference stage by performing labelling on the mesh, and additionally achieve higher accuracies.


Computer Vision and Pattern Recognition | 2010

“Lattice Cut” - Constructing superpixels using layer constraints

Alastair Philip Moore; Simon J. D. Prince; Jonathan Warrell

Unsupervised over-segmentation of an image into super-pixels is a common preprocessing step for image parsing algorithms. Superpixels are used as both regions of support for feature vectors and as a starting point for the final segmentation. Recent algorithms that construct superpixels that conform to a regular grid (or superpixel lattice) have used greedy solutions. In this paper we show that we can construct a globally optimal solution in either the horizontal or vertical direction using a single graph cut. The solution takes into account both edges in the image, and the coherence of the resulting superpixel regions. We show that our method outperforms existing algorithms for computing superpixel lattices. Additionally, we show that performance can be comparable or better than other contemporary segmentation algorithms which are not constrained to produce a lattice.
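The globally optimal single-boundary case can be illustrated with dynamic programming, a simpler cousin of the single graph cut the paper uses to optimise all the lattice boundaries in one direction jointly. The function name and cost convention below are illustrative.

```python
import numpy as np

def optimal_vertical_boundary(cost):
    """Dynamic-programming search for one top-to-bottom boundary path
    of minimum total cost, moving at most one column per row.
    Using e.g. cost = -gradient_magnitude makes the boundary snap to
    image edges while staying a coherent vertical path."""
    h, w = cost.shape
    acc = cost.copy().astype(np.float64)
    back = np.zeros((h, w), dtype=int)
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            j = int(np.argmin(acc[y - 1, lo:hi])) + lo
            back[y, x] = j
            acc[y, x] = cost[y, x] + acc[y - 1, j]
    # Backtrack from the cheapest endpoint in the last row.
    path = [int(np.argmin(acc[-1]))]
    for y in range(h - 1, 0, -1):
        path.append(int(back[y, path[-1]]))
    return path[::-1]  # column index of the boundary in each row
```

Running one such search per lattice stripe would give a greedy solution; the paper's contribution is solving all paths in a direction at once, globally, with a single graph cut.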


European Conference on Computer Vision | 2012

Filter-Based Mean-Field Inference for Random Fields with Higher-Order Terms and Product Label-Spaces

Vibhav Vineet; Jonathan Warrell; Philip H. S. Torr

Recently, a number of cross bilateral filtering methods have been proposed for solving multi-label problems in computer vision, such as stereo, optical flow and object class segmentation that show an order of magnitude improvement in speed over previous methods. These methods have achieved good results despite using models with only unary and/or pairwise terms. However, previous work has shown the value of using models with higher-order terms e.g. to represent label consistency over large regions, or global co-occurrence relations. We show how these higher-order terms can be formulated such that filter-based inference remains possible. We demonstrate our techniques on joint stereo and object labeling problems, as well as object class segmentation, showing in addition for joint object-stereo labeling how our method provides an efficient approach to inference in product label-spaces. We show that we are able to speed up inference in these models around 10-30 times with respect to competing graph-cut/move-making methods, as well as maintaining or improving accuracy in all cases. We show results on PascalVOC-10 for object class segmentation, and Leuven for joint object-stereo labeling.
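The core filtering trick can be sketched for the plain pairwise case: each mean-field update replaces explicit message passing over all pixel pairs with a Gaussian blur of the current marginals. The sketch below uses a spatial kernel only and a Potts compatibility; the bilateral kernel, higher-order terms, and product label-spaces from the paper are omitted, and all names and parameters are illustrative.

```python
import numpy as np

def gaussian_blur_1d(x, axis, sigma=1.0, radius=2):
    """Separable 1-D Gaussian filter along one axis."""
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, x)

def meanfield_dense_crf(unary, n_iters=5, weight=2.0):
    """Filter-based mean-field for a pairwise dense CRF (sketch).

    `unary` is (H, W, L) of per-pixel label costs.  Each iteration
    computes the pairwise message by Gaussian-blurring the current
    marginals Q -- the substitution that makes dense-CRF inference
    fast -- then renormalises with a softmax."""
    q = np.exp(-unary)
    q /= q.sum(-1, keepdims=True)
    for _ in range(n_iters):
        # Message: Gaussian-filtered marginals (spatial kernel only).
        msg = gaussian_blur_1d(gaussian_blur_1d(q, 0), 1)
        # Potts compatibility: a label is penalised by mass on other labels.
        energy = unary + weight * (msg.sum(-1, keepdims=True) - msg)
        q = np.exp(-energy)
        q /= q.sum(-1, keepdims=True)
    return q.argmax(-1)
```

Because the blur is separable, the cost per iteration is linear in the number of pixels rather than quadratic, which is where the order-of-magnitude speed-up comes from.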


International Conference on Computer Vision | 2009

Patch-based within-object classification

Jania Aghajanian; Jonathan Warrell; Simon J. D. Prince; Peng Li; Jennifer Rohn; Buzz Baum

Advances in object detection have made it possible to collect large databases of certain objects. In this paper we exploit these datasets for within-object classification. For example, we classify gender in face images, pose in pedestrian images and phenotype in cell images. Previous work has mainly targeted the above tasks individually using object specific representations. Here, we propose a general Bayesian framework for within-object classification. Images are represented as a regular grid of non-overlapping patches. In training, these patches are approximated by a predefined library. In inference, the choice of approximating patch determines the classification decision. We propose a Bayesian framework in which we marginalize over the patch frequency parameters to provide a posterior probability for the class. We test our algorithm on several challenging “real world” databases.
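The inference step can be sketched as follows: encode the patch grid against a library by nearest neighbour, then combine per-class patch counts into a posterior. In this sketch the +1 smoothing plays the role of the paper's marginalisation over patch frequencies under a uniform Dirichlet prior; the function names, shapes, and uniform class prior are all illustrative simplifications.

```python
import numpy as np

def encode(image, library, patch=4):
    """Cut the image into a non-overlapping grid of patches and
    replace each patch by the index of its nearest library patch."""
    h, w = image.shape
    idxs = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch].ravel()
            idxs.append(int(np.argmin(((library - p) ** 2).sum(1))))
    return np.array(idxs)

def class_posterior(image, library, class_counts, patch=4):
    """Posterior over classes from the chosen patch indices.

    `class_counts` is (n_classes, n_library_patches): how often each
    library patch was chosen in training images of each class.  The
    +1 is Dirichlet-style smoothing of the patch frequencies."""
    idxs = encode(image, library, patch)
    probs = (class_counts + 1.0) / (class_counts + 1.0).sum(1, keepdims=True)
    log_post = np.log(probs[:, idxs]).sum(1)
    log_post -= log_post.max()          # stabilise before exponentiating
    post = np.exp(log_post)
    return post / post.sum()
```

The same machinery applies unchanged whether the classes are gender, pose, or phenotype, which is the appeal of the general framework.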


International Journal of Computer Vision | 2014

Filter-Based Mean-Field Inference for Random Fields with Higher-Order Terms and Product Label-Spaces

Vibhav Vineet; Jonathan Warrell; Philip H. S. Torr

Recently, a number of cross bilateral filtering methods have been proposed for solving multi-label problems in computer vision, such as stereo, optical flow and object class segmentation that show an order of magnitude improvement in speed over previous methods. These methods have achieved good results despite using models with only unary and/or pairwise terms. However, previous work has shown the value of using models with higher-order terms e.g. to represent label consistency over large regions, or global co-occurrence relations. We show how these higher-order terms can be formulated such that filter-based inference remains possible. We demonstrate our techniques on joint stereo and object labelling problems, as well as object class segmentation, showing in addition for joint object-stereo labelling how our method provides an efficient approach to inference in product label-spaces. We show that we are able to speed up inference in these models around 10–30 times with respect to competing graph-cut/move-making methods, as well as maintaining or improving accuracy in all cases. We show results on PascalVOC-10 for object class segmentation, and Leuven for joint object-stereo labelling.


British Machine Vision Conference | 2011

Human Instance Segmentation from Video using Detector-based Conditional Random Fields

Vibhav Vineet; Jonathan Warrell; Lubor Ladicky; Philip H. S. Torr

In this work, we propose a method for instance-based human segmentation in images and videos, extending the recent detector-based conditional random field model of Ladicky et al. Instance-based human segmentation involves pixel-level labelling of an image, partitioning it into distinct human instances and background. To achieve our goal, we add three new components to their framework. First, we include human parts-based detection potentials to take advantage of the structure present in human instances. Further, in order to generate a consistent segmentation from different human parts, we incorporate shape prior information, which biases the segmentation to characteristic overall human shapes. Also, we enhance the representative power of the energy function by adopting exemplar instance based matching terms, which helps our method to adapt easily to different human sizes and poses. Finally, we extensively evaluate our proposed method on the Buffy dataset with our new segmented ground truth images, and show a substantial improvement over existing CRF methods. These new annotations will be made available for future use as well.


International Conference on Computer Vision | 2009

Scene shape priors for superpixel segmentation

Alastair Philip Moore; Simon J. D. Prince; Jonathan Warrell; Umar Mohammed; Graham Jones

Unsupervised over-segmentation of an image into super-pixels is a common preprocessing step for image parsing algorithms. Superpixels are used as both regions of support for feature vectors and as a starting point for the final segmentation. In this paper we investigate incorporating a priori information into superpixel segmentations. We learn a probabilistic model that describes the spatial density of the object boundaries in the image. We then describe an over-segmentation algorithm that partitions this density roughly equally between superpixels whilst still attempting to capture local object boundaries. We demonstrate this approach using road scenes where objects in the center of the image tend to be more distant and smaller than those at the edge. We show that our algorithm successfully learns this foveated spatial distribution and can exploit this knowledge to improve the segmentation. Lastly, we introduce a new metric for evaluating vision labeling problems. We measure performance on a challenging real-world dataset and illustrate the limitations of conventional evaluation metrics.


International Conference on Pattern Recognition | 2010

CUDA Implementation of Deformable Pattern Recognition and its Application to MNIST Handwritten Digit Database

Yoshiki Mizukami; Katsumi Tadamura; Jonathan Warrell; Peng Li; Simon J. D. Prince

In this study we propose a deformable pattern recognition method with a CUDA implementation. In order to achieve the proper correspondence between foreground pixels of input and prototype images, a pair of distance maps is generated from the input and prototype images, whose pixel values are given by the distance to the nearest foreground pixel. A regularization technique then computes the horizontal and vertical displacements based on these distance maps. The dissimilarity is measured using the eight-directional derivatives of the input and prototype images, in order to preserve characteristic information on the curvature of line segments that might be lost after the deformation. Prototype-parallel displacement computation on CUDA and a gradual prototype elimination technique are employed to reduce the computational time without sacrificing accuracy. A simulation shows that the proposed method with a k-nearest neighbor classifier achieves an error rate of 0.57% on the MNIST handwritten digit database.
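The distance-map matching at the heart of the method can be sketched with a brute-force distance transform and a chamfer-style score. The paper's regularised displacement fields, eight-directional derivatives, and CUDA parallelism are all omitted here, and both function names are made up.

```python
import numpy as np

def distance_map(binary):
    """Distance from every pixel to the nearest foreground pixel
    (brute force; real implementations use a fast distance transform)."""
    ys, xs = np.nonzero(binary)
    fg = np.stack([ys, xs], 1).astype(np.float64)
    yy, xx = np.mgrid[:binary.shape[0], :binary.shape[1]]
    grid = np.stack([yy.ravel(), xx.ravel()], 1).astype(np.float64)
    d = np.sqrt(((grid[:, None, :] - fg[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(binary.shape)

def chamfer_dissimilarity(a, b):
    """Symmetric chamfer-style score between two binary patterns:
    mean distance from each foreground pixel of one image to the
    nearest foreground pixel of the other.  A crude stand-in for the
    paper's regularised deformable matching."""
    da, db = distance_map(a), distance_map(b)
    return db[a > 0].mean() + da[b > 0].mean()
```

Identical patterns score zero, and the score grows smoothly with displacement, which is what makes distance maps a convenient objective for computing displacement fields.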

Collaboration


Dive into Jonathan Warrell's collaborations.

Top Co-Authors

Vibhav Vineet (Oxford Brookes University)
Natasha Govender (Council of Scientific and Industrial Research)
Fred Nicolls (University of Cape Town)
Peng Li (Nanyang Technological University)
Mogomotsi Keaikitse (Council of Scientific and Industrial Research)
Glenn Sheasby (Oxford Brookes University)