
Publication


Featured research published by Toby P. Breckon.


British Machine Vision Conference | 2010

Object Recognition using 3D SIFT in Complex CT Volumes.

Gregory T. Flitton; Toby P. Breckon; Najla Megherbi Bouallagui

The automatic detection of objects within complex volumetric imagery is of increasing interest due to the use of dual energy Computed Tomography (CT) scanners as an aviation security deterrent. These devices produce a volumetric image akin to that encountered in prior medical CT work, but here we are dealing with a complex multi-object volumetric environment that includes significant noise artefacts. In this work we apply a recent 3D extension of the seminal SIFT approach to the volumetric recognition of rigid objects within this complex environment. A detailed overview of the approach is presented, together with results when applied to a set of exemplar CT volumetric imagery.
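As a rough illustration of the kind of processing involved, the sketch below detects candidate 3D interest points in a CT volume via a difference-of-Gaussians response; it is a minimal stand-in for a full 3D SIFT pipeline, and the sigma, scale ratio and response threshold are illustrative assumptions rather than the paper's values.

```python
# Minimal sketch: candidate 3D interest points in a CT volume via a
# difference-of-Gaussians (DoG) response. All parameters are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoints_3d(volume, sigma=2.0, k=1.6, threshold=0.02):
    """Return (z, y, x) coordinates of strong local DoG extrema in a 3D volume."""
    v = volume.astype(np.float32)
    v = (v - v.min()) / (np.ptp(v) + 1e-8)                    # normalise intensities
    response = np.abs(gaussian_filter(v, k * sigma) - gaussian_filter(v, sigma))
    local_max = maximum_filter(response, size=5)               # 5x5x5 non-max suppression
    return np.argwhere((response == local_max) & (response > threshold))

# Toy example: a synthetic volume containing one bright cube.
vol = np.zeros((64, 64, 64), dtype=np.float32)
vol[30:34, 30:34, 30:34] = 1.0
print(dog_keypoints_3d(vol)[:5])
```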


Proceedings of SPIE | 2011

Real-time people and vehicle detection from UAV imagery

Anna Gaszczak; Toby P. Breckon; Jiwan Han

A generic and robust approach for the real-time detection of people and vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present an approach for the automatic detection of vehicles based on multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery. Additionally we present a related approach for people detection in thermal imagery based on a similar cascaded classification technique combined with additional multivariate Gaussian shape matching. The results presented show the successful detection of vehicles and people under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance of the detector is optimized to reduce the overall false positive rate by aiming to detect each object of interest (vehicle/person) at least once in the environment (i.e. per search pattern flight path) rather than every object in each image frame. Currently the detection rate for people is ~70% and for cars ~80%, although the overall episodic object detection rate for each flight pattern exceeds 90%.
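The two-stage idea (a multi-scale cascaded Haar detector followed by a confirmation pass) can be sketched as below; the cascade XML path is a placeholder for a hypothetical model trained on thermal examples, and the intensity-based check is only a crude stand-in for the secondary confirmation described above.

```python
# Hedged sketch of two-stage detection on a single thermal frame.
import cv2

cascade = cv2.CascadeClassifier("thermal_vehicle_cascade.xml")   # hypothetical model file
frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)    # 8-bit thermal image

# Primary detection: cascaded Haar classification over multiple scales.
candidates = cascade.detectMultiScale(frame, scaleFactor=1.1,
                                      minNeighbors=3, minSize=(24, 24))

# Secondary confirmation: keep candidates whose mean intensity stands out
# against the scene background (a simple stand-in for the paper's stage).
background = frame.mean()
confirmed = [(x, y, w, h) for (x, y, w, h) in candidates
             if frame[y:y + h, x:x + w].mean() > background + 20]
print(confirmed)
```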


Journal of Electronic Imaging | 2013

Dictionary of Computer Vision and Image Processing

Robert B. Fisher; Toby P. Breckon; Kenneth M. Dawson-Howe; Andrew W. Fitzgibbon; Craig Robertson; Emanuele Trucco; Christopher K. I. Williams

Written by leading researchers, the 2nd edition of the Dictionary of Computer Vision & Image Processing is a comprehensive and reliable resource which now provides explanations of over 3500 of the most commonly used terms across image processing, computer vision and related fields including machine vision. It offers clear and concise definitions, with short examples or mathematical precision where necessary for clarity, making it a very usable reference for new entrants to these fields at senior undergraduate and graduate level, and for early career researchers building up their knowledge of key concepts. As a source for recent terminology and concepts, experienced professionals will also find it a valuable resource for keeping up to date with the latest advances. New features of the 2nd edition: it contains more than 1000 new terms, with a notably increased focus on image processing and machine vision; it adds reference links across the majority of terms, pointing readers to further information about the concept under discussion so that they can continue to expand their understanding; and it is now available as an eBook with enhanced content, including approximately 50 videos to further illustrate specific terms, active cross-linking between terms so that readers can easily navigate from one related term to another and build up a full picture of the topic in question, and hyperlinked references to fully embed the text in the current literature.


Machine Vision and Applications | 2012

Automatic real-time road marking recognition using a feature driven approach

Alireza Kheyrollahi; Toby P. Breckon

Automatic road marking recognition is a key problem within the domain of automotive vision that lends support to both autonomous urban driving and augmented driver assistance such as situationally aware navigation systems. Here we propose an approach to this problem based on the extraction of robust road marking features via a novel pipeline of inverse perspective mapping and multi-level binarisation. A trained classifier combined with additional rule-based post-processing then facilitates the real-time delivery of road marking information as required. The approach is shown to operate successfully over a range of lighting, weather and road surface conditions.
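A minimal sketch of the two pre-processing steps named above (inverse perspective mapping followed by multi-level binarisation) might look as follows; the source corner points depend on camera mounting and calibration, and both they and the threshold levels are illustrative assumptions.

```python
# Sketch: inverse perspective mapping (IPM) of the road region to a
# bird's-eye view, then binarisation at several threshold levels.
import cv2
import numpy as np

frame = cv2.imread("road_frame.png", cv2.IMREAD_GRAYSCALE)

# IPM: map a trapezoidal road region to a rectangular top-down view
# (source points here are placeholders for calibrated values).
src = np.float32([[420, 480], [860, 480], [1180, 720], [100, 720]])
dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])
H = cv2.getPerspectiveTransform(src, dst)
birds_eye = cv2.warpPerspective(frame, H, (400, 600))

# Multi-level binarisation: threshold at several levels and keep pixels
# that survive a majority of them, favouring bright painted markings.
levels = [120, 150, 180]
votes = sum((birds_eye > t).astype(np.uint8) for t in levels)
markings = (votes >= 2).astype(np.uint8) * 255
cv2.imwrite("markings.png", markings)
```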


IEEE Intelligent Vehicles Symposium | 2008

Integrated speed limit detection and recognition from real-time video

Marcin L. Eichner; Toby P. Breckon

Here we propose a complete system for the robust detection and recognition of current speed limit restrictions from a moving road vehicle. The approach includes the detection and recognition of both numerical limit and national limit (cancellation) signs, with the addition of automatic vehicle turn detection. The system utilizes RANSAC-based colour-shape detection of speed limit signs and neural network based recognition, whilst turn analysis relies on an optic flow based method. As primary detection is based on a robust colour and shape detection methodology, the result is a real-time algorithm that is invariant to variable road conditions. The integration of limit, cancellation and vehicle turn detection within the bounds of real-time system performance represents an advance on prior work within this field.
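The colour-shape detection stage can be illustrated roughly as below; a Hough circle transform is used as a simplified stand-in for the RANSAC-based shape fitting, and the colour ranges and detector parameters are assumptions.

```python
# Sketch: isolate red pixels characteristic of speed limit sign borders,
# then find circular candidates to pass on to a digit recogniser.
import cv2
import numpy as np

frame = cv2.imread("dashcam_frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so combine two hue ranges.
mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
mask = cv2.medianBlur(mask, 5)

circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
                           param1=100, param2=20, minRadius=10, maxRadius=80)
if circles is not None:
    for x, y, r in np.around(circles[0]).astype(int):
        # Each circular red region is a candidate sign; its interior would
        # be cropped and passed to a classifier for limit recognition.
        cv2.rectangle(frame, (x - r, y - r), (x + r, y + r), (0, 255, 0), 2)
cv2.imwrite("candidates.png", frame)
```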


IEEE Transactions on Intelligent Transportation Systems | 2011

Automatic Road Environment Classification

Isabelle Tang; Toby P. Breckon

The ongoing development of autonomous vehicles and the adaptive vehicle dynamics present in many modern vehicles have generated a need for road environment classification - i.e., the ability to determine the nature of the current road or terrain environment from an onboard vehicle sensor. In this paper, we investigate the use of a low-cost camera vision solution capable of urban, rural, or off-road classification based on the analysis of color and texture features extracted from a driver's-perspective camera view. A feature set based on color and texture distributions is extracted from multiple regions of interest in this forward-facing camera view and combined with a trained classifier approach to resolve two road-type classification problems of varying difficulty - {off-road, on-road} environment determination and the additional multiclass road environment problem of {off-road, urban, major/trunk road and multilane motorway/carriageway}. Two illustrative classification approaches are investigated, and the results are reported over a series of real environment data. An optimal performance of ~90% correct classification is achieved for the {off-road, on-road} problem at a near real-time classification rate of 1 Hz.
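A minimal sketch of such a feature-plus-classifier pipeline is given below, with colour statistics and a simple edge-density texture measure pooled from fixed regions of interest; the region layout, feature choice and classifier are assumptions rather than the configuration evaluated in the paper.

```python
# Sketch: pooled colour/texture features from fixed regions of interest,
# fed to a trained classifier for road environment labelling.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def road_features(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    rois = [hsv[h // 2:, :w // 2], hsv[h // 2:, w // 2:], hsv[:h // 2, :]]
    feats = []
    for roi in rois:
        feats.extend(roi.reshape(-1, 3).mean(axis=0))   # colour means per channel
        feats.extend(roi.reshape(-1, 3).std(axis=0))    # colour spread per channel
        gray = cv2.cvtColor(cv2.cvtColor(roi, cv2.COLOR_HSV2BGR), cv2.COLOR_BGR2GRAY)
        feats.append(float(np.abs(cv2.Laplacian(gray, cv2.CV_32F)).mean()))  # texture
    return np.array(feats, dtype=np.float32)

# Training would use labelled frames, e.g. 0 = off-road, 1 = urban, ...
# X = np.stack([road_features(f) for f in training_frames]); y = labels
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
# print(clf.predict([road_features(test_frame)]))
```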


Pattern Recognition | 2013

A comparison of 3D interest point descriptors with application to airport baggage object detection in complex CT imagery

Gregory T. Flitton; Toby P. Breckon; Najla Megherbi

We present an experimental comparison of 3D feature descriptors with application to threat detection in Computed Tomography (CT) airport baggage imagery. The descriptors range in complexity from a basic local density descriptor, through local region histograms, to three-dimensional (3D) extensions of both the RIFT descriptor and the seminal SIFT feature descriptor. We show that, in the complex CT imagery domain containing a high degree of noise and imaging artefacts, a specific instance object recognition system using the simpler descriptors appears to outperform a more complex RIFT/SIFT solution. Recognition rates in excess of 95% are demonstrated with minimal false-positive rates for a set of exemplar 3D objects.
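As an indication of what the simplest end of this descriptor spectrum looks like, the sketch below computes a local density histogram over a spherical neighbourhood around a 3D interest point; the radius and bin count are illustrative assumptions.

```python
# Sketch: a simple local density descriptor for a 3D interest point,
# i.e. a normalised histogram of voxel intensities in a spherical
# neighbourhood (assumes the volume is scaled to [0, 1]).
import numpy as np

def density_descriptor(volume, point, radius=6, bins=16):
    z, y, x = point
    zz, yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1, -radius:radius + 1]
    sphere = zz ** 2 + yy ** 2 + xx ** 2 <= radius ** 2
    patch = volume[z - radius:z + radius + 1,
                   y - radius:y + radius + 1,
                   x - radius:x + radius + 1]
    hist, _ = np.histogram(patch[sphere], bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)          # normalise so descriptors are comparable

vol = np.random.rand(64, 64, 64).astype(np.float32)
print(density_descriptor(vol, (32, 32, 32)))
```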


Journal of X-ray Science and Technology | 2013

An experimental survey of metal artefact reduction in computed tomography.

Andre Mouton; Najla Megherbi; Katrien Van Slambrouck; Johan Nuyts; Toby P. Breckon

We present a survey of techniques for the reduction of streaking artefacts caused by metallic objects in X-ray Computed Tomography (CT) images. A comprehensive review of the existing state-of-the-art Metal Artefact Reduction (MAR) techniques, drawn predominantly from the medical CT literature, is supported by an experimental comparison of twelve MAR techniques. The experimentation is grounded in an evaluation based on a standard scientific comparison protocol for MAR methods, using a software-generated medical phantom image as well as a clinical CT scan. The experimentation is extended by considering novel applications of CT imagery consisting of metal objects in non-tissue surroundings, acquired from the aviation security screening domain. We address the shortage of thorough performance analyses in the existing MAR literature by conducting a qualitative as well as quantitative comparative evaluation of the selected techniques. We find the difficulty in generating accurate priors to be the predominant factor limiting the effectiveness of the state-of-the-art medical MAR techniques when applied to non-medical CT imagery. This study thus extends previous works by comparing several state-of-the-art MAR techniques, considering both medical and non-medical applications, and performing a thorough performance analysis that considers both image quality and computational demands.
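For readers unfamiliar with how sinogram-completion MAR methods (one of the technique families covered by such surveys) operate, the sketch below implements a basic linear-interpolation step over the metal trace; it is an illustration of the general idea only, not a reimplementation of any surveyed method, and the metal threshold is an assumption.

```python
# Sketch: classic linear-interpolation MAR. Metal is segmented in image
# space, its trace is located in the sinogram, the affected samples are
# interpolated from their neighbours, and the slice is reconstructed.
import numpy as np
from skimage.transform import radon, iradon

def linear_interpolation_mar(image, metal_threshold=2.0):
    theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    metal = image > metal_threshold                        # segment metal in image space
    sino = radon(image, theta=theta)
    trace = radon(metal.astype(float), theta=theta) > 0    # metal trace in the sinogram
    corrected = sino.copy()
    for j in range(sino.shape[1]):                         # per projection angle
        col, bad = sino[:, j], trace[:, j]
        if bad.any() and not bad.all():
            idx = np.arange(len(col))
            corrected[bad, j] = np.interp(idx[bad], idx[~bad], col[~bad])
    recon = iradon(corrected, theta=theta)
    recon[metal] = image[metal]                            # re-insert the metal object
    return recon

# Toy phantom: a soft "tissue" square containing a small metal insert.
phantom = np.zeros((128, 128))
phantom[40:90, 40:90] = 0.5
phantom[60:65, 60:65] = 5.0
print(linear_interpolation_mar(phantom).shape)
```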


International Conference on Industrial Technology | 2013

Improving feature-based object recognition for X-ray baggage security screening using primed visual words

Diana Turcsany; Andre Mouton; Toby P. Breckon

We present a novel Bag-of-Words (BoW) representation scheme for image classification tasks, where the separation of features distinctive of different classes is enforced via class-specific feature-clustering. We investigate the implementation of this approach for the detection of firearms in baggage security X-ray imagery. We implement our novel BoW model using the Speeded-Up Robust Features (SURF) detector and descriptor within a Support Vector Machine (SVM) classifier framework. Experimentation on a large, diverse data set yields a significant improvement in classification performance over previous works with an optimal true positive rate of 99.07% at a false positive rate of 4.31%. Our results indicate that class-specific clustering primes the feature space and ultimately simplifies the classification process. We further demonstrate the importance of using diverse, representative data and efficient training and testing procedures. The excellent performance of the classifier is a strong indication of the potential advantages of this technique in threat object detection in security screening settings.
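The class-specific clustering idea can be sketched as follows: descriptors from each class are clustered separately, the per-class vocabularies are concatenated into one codebook, and each image is encoded as a histogram over that codebook before SVM classification. Random arrays stand in for SURF descriptors here, and the cluster counts are assumptions.

```python
# Sketch: class-specific Bag-of-Words codebooks with an SVM classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_codebook(descriptors_per_class, words_per_class=50):
    # One k-means vocabulary per class, stacked into a single codebook.
    centres = [KMeans(n_clusters=words_per_class, n_init=5)
               .fit(np.vstack(d)).cluster_centers_
               for d in descriptors_per_class]
    return np.vstack(centres)

def encode(descriptors, codebook):
    # Hard-assign each descriptor to its nearest visual word.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d2.argmin(1), minlength=len(codebook)).astype(np.float32)
    return hist / max(hist.sum(), 1.0)

# Toy example with random "descriptors" standing in for SURF features.
rng = np.random.default_rng(0)
threat = [rng.normal(0.0, 1.0, (200, 32)) for _ in range(10)]
benign = [rng.normal(3.0, 1.0, (200, 32)) for _ in range(10)]
codebook = build_codebook([threat, benign])
X = np.array([encode(d, codebook) for d in threat + benign])
y = np.array([1] * 10 + [0] * 10)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```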


British Machine Vision Conference | 2012

On Cross-Spectral Stereo Matching using Dense Gradient Features.

Peter Pinggera; Toby P. Breckon; Horst Bischof

Here we address the problem of scene depth recovery within cross-spectral stereo imagery (each image sensed over a differing spectral range). We compare several robust matching techniques which are able to capture local similarities between the structure of cross-spectral images, together with a range of stereo optimisation techniques for the computation of valid dense depth estimates in this case. As the performance of standard optical camera systems can be severely affected by environmental conditions, the use of combined sensing systems operating in differing parts of the electromagnetic spectrum is increasingly common [5]. The combination of optical and thermal imagery is therefore an attractive solution in many sensing and surveillance scenarios, as the complementary nature of the two modalities can be exploited and their individual drawbacks largely compensated. Despite the inherent stereo setup of this common two-sensor deployment, in practical scenarios it is rarely exploited. Here, we specifically deal with the recovery of dense depth information from thermal (far infrared spectrum) and optical (visible spectrum) image pairs, where large differences in the characteristics of the image pairs make this task significantly more challenging than the common stereo case (Figure 1A).
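A rough sketch of a gradient-based matching cost for a rectified cross-spectral pair is given below: because raw intensities differ wildly between modalities, patches are compared on gradient magnitude using zero-mean normalised cross-correlation. The window size, disparity range and pixel location are illustrative assumptions, and a full system would feed such costs into a global stereo optimiser rather than the per-pixel winner-takes-all shown here.

```python
# Sketch: dense-gradient matching cost between visible and thermal images.
import cv2
import numpy as np

def gradient_mag(img):
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)

def matching_cost(left, right, y, x, d, win=7):
    """Negative zero-mean NCC between gradient patches at disparity d."""
    h = win // 2
    a = left[y - h:y + h + 1, x - h:x + h + 1].ravel()
    b = right[y - h:y + h + 1, x - d - h:x - d + h + 1].ravel()
    a, b = a - a.mean(), b - b.mean()
    return -float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-6)

visible = cv2.imread("visible_rect.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
thermal = cv2.imread("thermal_rect.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
gl, gr = gradient_mag(visible), gradient_mag(thermal)

# Winner-takes-all disparity for one pixel (row 120, column 200).
costs = [matching_cost(gl, gr, 120, 200, d) for d in range(0, 48)]
print(int(np.argmin(costs)))
```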

