Nicholas J. Redding
Defence Science and Technology Organisation
Publications
Featured research published by Nicholas J. Redding.
Neural Networks | 1993
Nicholas J. Redding; Adam Kowalczyk; Tom Downs
Constructive learning algorithms are important because they address two practical difficulties of learning in artificial neural networks. First, it is not always possible to determine the minimal network consistent with a particular problem. Second, algorithms like backpropagation can require networks that are larger than the minimal architecture for satisfactory convergence. Further, constructive algorithms have the advantage that polynomial-time learning is possible if network size is chosen by the learning algorithm so that the learning of the problem under consideration is simplified. This article considers the representational ability of feedforward networks (FFNs) in terms of the fan-in required by the hidden units of a network. We define network order to be the maximum fan-in of the hidden units of a network. We prove, in terms of the problems they may represent, that a higher-order network (HON) is at least as powerful as any other FFN architecture when the order of the networks is the same. Next, we present a detailed theoretical development of a constructive, polynomial-time algorithm that will determine an exact HON realization with minimal order for an arbitrary binary or bipolar mapping problem. This algorithm does not have any parameters that need tuning for good performance. We show how an FFN with sigmoidal hidden units can be determined from the HON realization in polynomial time. Finally, simulation results of the constructive HON algorithm are presented for the two-or-more-clumps problem, demonstrating that the algorithm performs well when compared with the Tiling and Upstart algorithms.
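To make the notion of network order concrete, the sketch below evaluates a single higher-order unit on bipolar inputs, where the order is the size of the largest input subset carrying a non-zero weight. This is only an illustration of the representation the paper works with, not its construction algorithm; the weights here are hand-picked to realise XOR.

```python
from itertools import combinations
import numpy as np

def hon_unit(x, weights, order):
    """Output of one higher-order (polynomial threshold) unit on a bipolar
    input vector x. 'weights' maps index subsets (tuples) to coefficients;
    'order' is the unit's fan-in, i.e. the size of the largest subset used."""
    total = weights.get((), 0.0)  # bias term
    for k in range(1, order + 1):
        for subset in combinations(range(len(x)), k):
            w = weights.get(subset, 0.0)
            if w:
                total += w * np.prod([x[i] for i in subset])
    return 1 if total >= 0 else -1

# A second-order unit realising XOR on bipolar (+1/-1) inputs,
# since the product x0*x1 is -1 exactly when the inputs differ.
w = {(0, 1): -1.0}
for x in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    print(x, hon_unit(x, w, order=2))
```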
digital image computing: techniques and applications | 2008
Julius Fabian Ohmer; Nicholas J. Redding
Many computer vision methods rely on frame registration information obtained with algorithms such as the Kanade-Lucas-Tomasi (KLT) feature tracker, which is known for its excellent performance in that area. Various research groups have proposed methods to extend its performance, both in terms of execution time and stability. Recent research has shown that current graphics processing units (GPUs) are very efficient SIMD parallel processing architectures that can speed up the execution of the KLT algorithm by an order of magnitude compared with an ordinary CPU implementation, making it suitable for real-time applications on commodity hardware. Previous publications demonstrated the use of the GPU for the tracking and image processing parts of the KLT algorithm. One essential but computationally demanding step of the KLT algorithm is feature selection. It injects fresh feature points into the existing point set and thus enables the KLT to continuously track points throughout an image sequence. Those sequences can contain rapid movements or may be of low quality; in such situations features diminish rapidly and the step must potentially be performed for every frame. The performance of the otherwise efficient GPU implementation then declines substantially, as this step includes a sorting operation on the sparse feature map, which is difficult to implement efficiently on the GPU. We use the KLT algorithm to calculate a stable, highly accurate frame registration from the feature point set, and we found that maintaining a well-distributed feature set through frequent injections of points is an essential requirement. In this paper we demonstrate an alternative feature reselection method that can be efficiently implemented on the GPU. It is a Monte-Carlo-based approximation of the original method and leads to very good tracking results at just a fraction of the computational cost.
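As a rough sketch of the Monte Carlo idea (on the CPU in NumPy, rather than the authors' GPU implementation), feature reselection can draw random candidate pixels from the cornerness response and keep those that clear a threshold and stay clear of already tracked features, avoiding a global sort. The sample count, threshold and minimum distance below are illustrative assumptions only.

```python
import numpy as np

def monte_carlo_reselect(response, existing_pts, n_samples=2000,
                         threshold=1e-3, min_dist=8, rng=None):
    """Monte Carlo feature reselection: sample candidate pixels at random
    and keep the ones whose cornerness response clears a threshold and
    which are not too close to an already tracked feature. Illustrative
    CPU sketch only; the paper's version runs on the GPU."""
    rng = rng or np.random.default_rng()
    h, w = response.shape
    keep = []
    occupied = list(existing_pts)
    for _ in range(n_samples):
        y, x = rng.integers(0, h), rng.integers(0, w)
        if response[y, x] < threshold:
            continue
        if any((y - py) ** 2 + (x - px) ** 2 < min_dist ** 2
               for py, px in occupied):
            continue
        keep.append((y, x))
        occupied.append((y, x))
    return keep
```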
digital image computing: techniques and applications | 2005
Ronald Jones; Branko Ristic; Nicholas J. Redding; David M. Booth
This paper describes a moving target indication (MTI) and tracking system developed for acquiring and tracking targets in video from moving sensors, in particular airborne urban surveillance video. The paradigm of the moving sensor, a typical scenario in defence applications (e.g. UAV surveillance video), poses some unique problems compared to the stationary sensor. Our solution draws on a number of algorithms from the vision and tracking communities and combines them into a novel solution to the problem. Moreover, given a suitable cluster of conventional hardware, the system provides a near real-time MTI capability.
Digital Signal Processing | 2000
Tristrom Cooke; Nicholas J. Redding; Jim Schroeder; Jingxin Zhang
Several methods are available that capture the statistics of radar imagery. The best features, in the sense of man-made target discrimination, are expected to differ between types of natural background and between objects of interest such as vehicles. We demonstrate that discrimination of natural background and man-made objects in low-resolution synthetic aperture radar imagery is possible using multiscale autoregressive (MAR) models, multiscale autoregressive moving average (MARMA) models, and singular value decomposition (SVD) methods. We use the model coefficients, moments of the model residual vectors, a subset of eigenvectors, and moments of the selected eigenvectors as features for target discrimination. All of the test imagery used here had a resolution of 1.5 metres.
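As an illustration of the SVD branch of the feature set (the exact features are described in the paper; this is an assumed, simplified variant), one can take the leading singular values of an image chip together with low-order moments of its leading singular vector:

```python
import numpy as np

def svd_features(chip, k=4):
    """Feature vector for an image chip from its singular value
    decomposition: the k largest singular values plus low-order moments
    of the leading right-singular vector. Illustrative choice of
    features; the paper also uses MAR/MARMA model coefficients."""
    u, s, vt = np.linalg.svd(chip, full_matrices=False)
    lead = vt[0]                       # leading right-singular vector
    moments = [lead.mean(), lead.var(),
               ((lead - lead.mean()) ** 3).mean()]  # 3rd central moment
    return np.concatenate([s[:k], moments])

# Usage: features from a 32x32 chip of speckle-like noise.
chip = np.random.default_rng(0).gamma(1.0, 1.0, size=(32, 32))
print(svd_features(chip).shape)   # (7,)
```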
intelligent information systems | 1996
David I. Kettler; Nicholas J. Redding
A technique for cleaning up a thinned object in an image is presented. This trimming technique removes extraneous lines due to artifacts of the thinning procedure and certain forms of noise present in the original image. The method was developed as part of a system for extracting information from oblique ionograms. The performance of the method for this case is demonstrated and some implementation details are discussed.
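A common way to trim a skeleton, shown below as a generic illustration rather than necessarily the rule used in the paper, is to repeatedly delete endpoint pixels so that spurs shorter than a chosen length vanish while longer strokes survive largely intact:

```python
import numpy as np

def prune_spurs(skeleton, max_spur_len=10):
    """Remove short spurs from a binary skeleton by repeatedly deleting
    endpoint pixels (pixels with exactly one 8-connected neighbour).
    A spur shorter than max_spur_len disappears completely; longer
    segments only lose pixels at their open ends. Illustrative trimming
    rule, not necessarily the one used in the paper."""
    sk = skeleton.astype(bool).copy()
    for _ in range(max_spur_len):
        padded = np.pad(sk, 1)
        # count the 8-neighbours of every pixel
        nbrs = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0))[1:-1, 1:-1]
        endpoints = sk & (nbrs == 1)
        if not endpoints.any():
            break
        sk &= ~endpoints
    return sk
```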
digital image computing: techniques and applications | 2008
Nicholas J. Redding; Julius Fabian Ohmer; Judd Kelly; Tristrom Cooke
This paper describes a prototype system for performing handover between cameras with non-overlapping views. The design is being used to identify problems that may arise in the development of a larger, more capable, and fully automatic system. If there is no information about the spatio-temporal relationship between cameras to assist in matching individuals, similarities in appearance may be used. Here, an object's appearance is represented by a vector of features calculated from its delineation. The features considered are the scale-invariant feature transform, grey-level co-occurrence matrix features, local binary patterns, Zernike moments and some simple colour features. The system has been tested on a difficult surveillance scenario involving opposing views of the subjects (frontal presentation in one sequence matched with rear presentation in the other, and vice versa). Several classification strategies are employed to determine the best match across presentations of the subjects in each sequence. The quality of the results was lower than expected but provides useful information for improving the robustness of the system in future.
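The matching step can be sketched as follows, with a deliberately tiny stand-in descriptor (per-channel colour means plus a coarse grey-level histogram) instead of the full SIFT/GLCM/LBP/Zernike feature set, and a plain nearest-neighbour rule as one possible classification strategy:

```python
import numpy as np

def appearance_vector(patch):
    """Small appearance descriptor for an RGB patch: per-channel colour
    means plus a coarse grey-level histogram. A stand-in for the richer
    features discussed in the paper."""
    means = patch.reshape(-1, 3).mean(axis=0) / 255.0
    grey = patch.mean(axis=2)
    hist, _ = np.histogram(grey, bins=16, range=(0, 255), density=True)
    return np.concatenate([means, hist])

def match_across_cameras(patches_a, patches_b):
    """Nearest-neighbour handover: for each detection in camera A, return
    the index of the most similar detection in camera B, measured by
    Euclidean distance in feature space."""
    feats_a = np.stack([appearance_vector(p) for p in patches_a])
    feats_b = np.stack([appearance_vector(p) for p in patches_b])
    d = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    return d.argmin(axis=1)
```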
digital image computing: techniques and applications | 2007
Julius Fabian Ohmer; Peter G. Perry; Nicholas J. Redding
A background model is constructed to detect moving objects in video sequences. Subsequent frames are stacked on top of each other using registration information that must be obtained in a preprocessing step; a change in the brightness value of a pixel over time may then be caused by a moving object. Current graphics processing units (GPUs) have proven to be very capable parallel processing architectures that can outperform current CPUs by an order of magnitude. This paper presents how current GPUs can be used to rapidly construct background models from a sequence of video still frames. We specifically discuss how the implementation can benefit from special features of GPUs that are available through the graphics API OpenGL.
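In NumPy terms (a CPU sketch only; the paper builds the model on the GPU through OpenGL), a stacked-frame background model and the associated change test amount to:

```python
import numpy as np

def background_model(registered_frames):
    """Per-pixel temporal median over a stack of registered frames.
    CPU/NumPy illustration of the idea; the paper constructs the model
    on the GPU."""
    stack = np.stack(registered_frames, axis=0).astype(np.float32)
    return np.median(stack, axis=0)

def moving_mask(frame, background, threshold=25.0):
    """Pixels whose brightness departs from the background model by more
    than the threshold are flagged as potentially moving."""
    return np.abs(frame.astype(np.float32) - background) > threshold
```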
image and vision computing new zealand | 2012
Tony Scoleri; Shannon Fehlmann; David M. Booth; Robert Christie; Martin Hamlyn; Nicholas J. Redding
This paper describes a system for real-time Euclidean measurement of scene objects captured from an airborne platform. Novelty resides in the adoption of a video calibration technique that recovers the absolute scene scale from a single ground reference length. The sensor footprint, from which the reference length is derived, is itself acquired by intersecting the sensor field of view with a digital elevation map, rather than the more usual registration of a video frame with an orthographic reference image. The footprint, together with the sensor and platform parameters, is stored in the video metadata for immediate or offline processing. Compliance with Motion Imagery Standards Board standards ensures interoperability and, in particular, that the subsequent camera calibration utilities can be platform independent. Once the camera is calibrated, auxiliary video metrology methods provide 3-D photogrammetric tools for object-level analysis. Two system workflows are presented that allow live scene mensuration of either autonomously tracked targets or manually chosen ones. Experiments included measuring heights and ground distances in real-life surveillance videos, and comparison with ground truth across three measurement methods validates our system's performance.
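The scale-recovery step can be summarised with a small sketch: once a reconstruction is available up to an unknown scale, a single reference length fixes that scale, after which Euclidean measurements follow directly. The point indices and reference length below are hypothetical; in the system the reference length comes from the sensor footprint intersected with a digital elevation map.

```python
import numpy as np

def apply_metric_scale(points_3d, ref_idx_a, ref_idx_b, ref_length_m):
    """Upgrade an up-to-scale 3-D reconstruction to metric units using a
    single reference length: the scale factor is the true length of a
    known segment divided by its reconstructed length."""
    recon_len = np.linalg.norm(points_3d[ref_idx_a] - points_3d[ref_idx_b])
    scale = ref_length_m / recon_len
    return points_3d * scale

def euclidean_measure(points_metric, i, j):
    """Metric distance between two reconstructed scene points."""
    return np.linalg.norm(points_metric[i] - points_metric[j])
```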
digital image computing: techniques and applications | 2005
David M. Booth; Nicholas J. Redding; Ronald Jones; M. Smith; I. J. Lucas; K. L. Jones
This paper deals with the provision of cueing aids for Imagery Analysts (IAs), who may need to integrate many types of imagery, intelligence, collateral data, and military doctrine to be effective. This task is becoming increasingly difficult as the volume and diversity of inputs increase and as adversaries become more sophisticated technologically. We outline the issues (particularly that of interoperability with other systems) that have influenced the development and use of our reconfigurable analysts' exploitation infrastructure. At the centre is the Analysts' Detection Support System (ADSS), which provides a flexible processing engine for sensor data. We describe ADSS, some of its current functionality, and its context and use within a federated exploitation regime. Examples of its use in wide-area search and surveillance applications are outlined.
international conference on acoustics, speech, and signal processing | 2003
Jingxin Zhang; Jim Schroeder; Nicholas J. Redding
The paper investigates the impact of SAR image enhancement on the performance of small target detection in SAR images. Three SAR image enhancement algorithms are evaluated on large SAR image data-sets. The evaluation results show that image enhancement can greatly improve the performance of false alarm mitigation, and that the level of performance improvement is correlated with the resolution and background suppression of the enhanced images. The higher the resolution and the level of background suppression, the higher the level of performance improvement.