Publication


Featured research published by Scott F. Page.


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2016

Towards integrated threat assessment and sensor management: Bayesian multi-target search

Scott F. Page; James P. Oldfield; Paul A. Thomas

Currently, most land intelligence, surveillance and reconnaissance (ISR) systems, especially those employed in critical infrastructure protection contexts, comprise a suite of sensors (e.g. EO/IR, radar, etc.) loosely integrated into a central command and control (C2) system with limited autonomy. We consider a concept of a modular and autonomous architecture where a set of heterogeneous autonomous sensor modules (ASMs) connect to a high-level decision making module (HLDMM) in a plug-and-play manner. Working towards an integrated threat evaluation and sensor management approach capable of optimizing the ASM suite to search for, localise, and capture relevant imagery of multiple threats in and around the area under protection, we propose a Bayesian multi-target search algorithm. In contrast to earlier work, we demonstrate how the algorithm can reduce the time to acquire threats through the incorporation of target dynamics. The derivation of the algorithm from an information-theoretic perspective is given and its links with the probability hypothesis density (PHD) filter are explored. We discuss the results of a demonstration HLDMM system which embodies the search algorithm and was tested in realistic base protection scenarios with live sensors and targets.
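
A rough illustration of the kind of grid-based Bayesian search described above: the sketch maintains a cell-occupancy probability map, diffuses it with a simple target motion model, greedily tasks the sensor, and applies a Bayes update after each empty look. The grid size, detection probability and diffusion rate are illustrative assumptions rather than values from the paper.

```python
# A minimal sketch of grid-based Bayesian search with target dynamics.
# Assumptions (not from the paper): independent Bernoulli occupancy per cell,
# a single sensor observing one cell per look, and a random-walk motion model.
import numpy as np

GRID = (50, 50)      # search-area discretisation
PD = 0.8             # probability of detecting a target present in the observed cell
DIFFUSION = 0.1      # fraction of probability mass leaking to 4-neighbours per step

def predict(p):
    """Propagate the occupancy map one step with a random-walk motion model
    (mass leaking off the grid edge is simply discarded in this sketch)."""
    spread = DIFFUSION * p
    p_new = (1 - DIFFUSION) * p
    p_new[1:, :] += 0.25 * spread[:-1, :]
    p_new[:-1, :] += 0.25 * spread[1:, :]
    p_new[:, 1:] += 0.25 * spread[:, :-1]
    p_new[:, :-1] += 0.25 * spread[:, 1:]
    return p_new

def update_no_detection(p, cell):
    """Bayes update of the observed cell after a look that returned no detection."""
    q = p.copy()
    q[cell] = p[cell] * (1 - PD) / (1 - PD * p[cell])
    return q

def choose_cell(p):
    """Greedy sensor management: look at the cell with the highest occupancy
    probability, a common surrogate for maximising expected information gain."""
    return np.unravel_index(np.argmax(p), p.shape)

p = np.full(GRID, 1.0 / np.prod(GRID))   # uniform prior over the protected area
for _ in range(10):                      # one iteration per sensor look
    p = predict(p)
    cell = choose_cell(p)
    p = update_no_detection(p, cell)     # assuming no detection on this look
```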


Proceedings of SPIE | 2016

Toward sensor modular autonomy for persistent land intelligence surveillance and reconnaissance (ISR)

Paul A. Thomas; Gillian Fiona Marshall; David Andrew Alexander Faulkner; Philip John Kent; Scott F. Page; Simon Islip; James P. Oldfield; Toby P. Breckon; Mikolaj E. Kundegorski; David J. Clark; Tim Styles

Currently, most land Intelligence, Surveillance and Reconnaissance (ISR) assets (e.g. EO/IR cameras) are simply data collectors. Understanding, decision making and sensor control are performed by the human operators, imposing a high cognitive load. Any automation in the system has traditionally involved bespoke design of centralised systems that are highly specific to the assets/targets/environment under consideration, resulting in complex, inflexible systems that exhibit poor interoperability. We address a concept of Autonomous Sensor Modules (ASMs) for land ISR, where these modules have the ability to make low-level decisions on their own in order to fulfil a higher-level objective, and plug in, with the minimum of preconfiguration, to a High Level Decision Making Module (HLDMM) through a middleware integration layer. The dual requisites of autonomy and interoperability create challenges around information fusion and asset management in an autonomous hierarchical system, which are addressed in this work. This paper presents the results of a demonstration system, known as Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT), which was shown in realistic base protection scenarios with live sensors and targets. The SAPIENT system performed sensor cueing, intelligent fusion, sensor tasking, target hand-off and compensation for compromised sensors, without human control, and enabled rapid integration of ISR assets at the time of system deployment rather than at design time. Potential benefits include rapid interoperability for coalition operations, situation understanding with low operator cognitive burden and autonomous sensor management in heterogeneous sensor systems.
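
The plug-and-play ASM/HLDMM pattern can be pictured with a short sketch. The interface below is hypothetical (it is not the SAPIENT middleware); it only shows sensor modules registering at run time, reporting detections and being cued by a central decision-making module.

```python
# A minimal sketch of a plug-and-play ASM/HLDMM interface (assumed, not SAPIENT):
# autonomous sensor modules register with the decision-making module at deployment
# time and exchange detections and tasks through a common interface.
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class Detection:
    asm_id: str
    position: tuple          # estimated target position (x, y), units assumed metres
    confidence: float

class SensorModule(Protocol):
    asm_id: str
    def sense(self) -> list[Detection]: ...
    def task(self, region: tuple) -> None: ...   # e.g. "look at this position"

@dataclass
class HLDMM:
    modules: dict = field(default_factory=dict)

    def register(self, asm: SensorModule) -> None:
        """Plug-and-play: ASMs announce themselves at deployment, not design time."""
        self.modules[asm.asm_id] = asm

    def step(self) -> list[Detection]:
        # Gather detections from every registered ASM, fuse them (here: just pool),
        # then cue the other sensors onto the highest-confidence detection.
        detections = [d for asm in self.modules.values() for d in asm.sense()]
        if detections:
            best = max(detections, key=lambda d: d.confidence)
            for asm in self.modules.values():
                if asm.asm_id != best.asm_id:
                    asm.task(best.position)
        return detections
```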


Proceedings of SPIE | 2012

A method for 3D scene recognition using shadow information and a single fixed viewpoint

David C. Bamber; Jeremy D. Rogers; Scott F. Page

The ability to passively reconstruct a scene in 3D provides significant benefit to Situational Awareness systems employed in security and surveillance applications. Traditionally, passive 3D scene modelling techniques, such as Shape from Silhouette, require images from multiple sensor viewpoints, acquired either through the motion of a single sensor or from multiple sensors. As a result, the application of these techniques often attracts high costs and presents numerous practical challenges. This paper presents a 3D scene reconstruction approach based on exploiting scene shadows, which only requires information from a single static sensor. The paper demonstrates that a large amount of 3D information about a scene can be interpreted from shadows: shadows reveal the shape of objects as viewed from a solar perspective, and additional perspectives are gained as the sun arcs across the sky. The approach has been tested on synthetic and real data and is shown to be capable of reconstructing 3D scene objects where traditional 3D imaging methods fail. Provided the shadows within a scene are discernible, the proposed technique is able to reconstruct 3D objects that are camouflaged, obscured or even outside of the sensor's field of view. The proposed approach can be applied in a range of applications, for example urban surveillance, checkpoint and border control, critical infrastructure protection, and the identification of concealed or suspicious objects or persons which would normally be hidden from the sensor viewpoint.
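
A voxel-carving sketch of the underlying principle, assuming a flat ground plane and a known sun direction per observation: any voxel whose projection along the sun ray lands on ground observed to be sunlit cannot be solid, so successive shadow masks carve away free space. This illustrates the idea only and is not the paper's algorithm.

```python
# A minimal shadow-carving sketch (assumed form): voxels that project onto sunlit
# ground along the sun direction cannot be occupied, so each shadow mask taken at
# a different sun position removes more free space from the voxel grid.
import numpy as np

NX, NY, NZ = 64, 64, 32        # voxel grid over the scene (assumed unit voxels)

def carve(occupied, shadow_mask, sun_dir):
    """Remove voxels whose ground projection along the sun ray was observed sunlit.

    occupied    : (NX, NY, NZ) boolean voxel grid, True = possibly solid
    shadow_mask : (NX, NY) boolean ground map, True = ground cell is in shadow
    sun_dir     : unit vector pointing from the scene towards the sun (dz > 0)
    """
    dx, dy, dz = sun_dir
    xs, ys, zs = np.nonzero(occupied)
    # Project each candidate voxel down to the ground plane along the sun ray.
    t = zs / dz
    gx = np.round(xs - t * dx).astype(int)
    gy = np.round(ys - t * dy).astype(int)
    inside = (gx >= 0) & (gx < NX) & (gy >= 0) & (gy < NY)
    # Keep voxels whose projection falls outside the observed ground map (no
    # information) or onto shadowed ground; carve the rest.
    keep = np.ones(len(xs), dtype=bool)
    keep[inside] = shadow_mask[gx[inside], gy[inside]]
    occupied[xs[~keep], ys[~keep], zs[~keep]] = False
    return occupied

# Start with everything possibly solid and carve with each time-stamped shadow mask
# extracted from the single fixed sensor, e.g.:
# for shadow_mask, sun_dir in observations:
#     volume = carve(volume, shadow_mask, sun_dir)
volume = np.ones((NX, NY, NZ), dtype=bool)
```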


Proceedings of SPIE | 2009

Information abstraction for enhanced image fusion based surveillance systems

Scott F. Page; Moira I. Smith; Duncan Hickman; Paul K. Kimber

The benefits of image fusion for man-in-the-loop Detection, Recognition, and Identification (DRI) tasks are well known. However, the performance of conventional image fusion systems is typically sub-optimal, as they fail to capitalise on high-level information which can be abstracted from the imagery. As part of a larger study into an Intelligent Image Fusion (I2F) framework, this paper presents a novel approach which exploits high-level cues to adaptively enhance the fused image via feedback to the pixel-level processing. Two scenarios are chosen for illustrative application of the approach: Situational Awareness and Anomalous Object Detection (AOD). In the Situational Awareness scenario, motion and other cues are used to enhance areas of the image according to predefined tasks, such as the detection of moving targets of a certain size. This yields a large increase in Local Signal-to-Clutter Ratio (LSCR) when compared to a baseline, non-adaptive approach. In the AOD scenario, spatial and spectral information is used to direct a foveal-patch image fusion algorithm. This demonstrates a significant increase in the Probability of Detection on test imagery whilst simultaneously reducing the mean number of false alarms when compared to a baseline, non-foveal approach. The paper presents the rationale for the I2F approach and details two specific examples of how it can be applied to address very different applications. Design details and quantitative performance analysis results are reported.
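
The feedback of high-level cues into pixel-level processing can be sketched as cue-weighted blending of the input bands. The weighting policy below (boosting the IR contribution inside cued regions) is an assumption for illustration, not the I2F design.

```python
# A minimal sketch of cue-driven adaptive fusion: a high-level cue mask (e.g. a
# motion detection result) feeds back into the pixel-level fusion weights so that
# task-relevant regions are emphasised in the fused image.
import numpy as np

def fuse(eo, ir, cue_mask, base_weight=0.5, boost=0.4):
    """Weighted fusion of two co-registered, normalised images.

    eo, ir   : float arrays in [0, 1], same shape
    cue_mask : float array in [0, 1]; 1 where the high-level cue fires
    """
    # Assumed policy: shift weight towards the IR channel inside cued regions,
    # on the basis that moving warm targets separate better from clutter in IR.
    w_ir = np.clip(base_weight + boost * cue_mask, 0.0, 1.0)
    return (1.0 - w_ir) * eo + w_ir * ir

def motion_cue(prev_frame, frame, threshold=0.05):
    """A simple frame-differencing motion cue to drive the fusion weights."""
    return (np.abs(frame - prev_frame) > threshold).astype(float)
```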


Signal Processing, Sensor/Information Fusion, and Target Recognition XXVII | 2018

Enabling self-configuration of fusion networks via scalable opportunistic sensor calibration

Murat Uney; Keith Copsey; Scott F. Page; Bernard Mulgrew; Paul A. Thomas

The range of applications in which sensor networks can be deployed depends heavily on the ease with which sensor locations/orientations can be registered and the accuracy of this process. We present a scalable strategy for algorithmic network calibration using sensor measurements from non-cooperative objects. Specifically, we use recently developed separable likelihoods in order to scale with the number of sensors whilst capturing the overall uncertainties. We demonstrate the efficacy of our self-configuration solution using a real network of radar and lidar sensors for perimeter protection and compare the accuracy achieved to manual calibration.
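
For intuition, a much simpler stand-in for the registration problem is sketched below: a least-squares rigid alignment of matched detections of a shared, non-cooperative object seen by two sensors. The paper's separable-likelihood formulation additionally captures the measurement uncertainties and scales over many sensors; this sketch only illustrates the pairwise geometry.

```python
# A minimal sketch of opportunistic pairwise registration: estimate one sensor's
# rotation and translation relative to a reference sensor from simultaneous
# detections of the same object. This is a simple least-squares stand-in, not the
# separable-likelihood method used in the paper.
import numpy as np

def register_pair(ref_pts, sen_pts):
    """2D rigid alignment (Kabsch/Procrustes) mapping sensor coordinates into the
    reference frame, given matched detection pairs of shape (N, 2) from both."""
    ref_c = ref_pts - ref_pts.mean(axis=0)
    sen_c = sen_pts - sen_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(sen_c.T @ ref_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = ref_pts.mean(axis=0) - R @ sen_pts.mean(axis=0)
    return R, t                              # sensor pose in the reference frame

# Scaling note: registering each sensor against a common reference (or chaining
# pairwise registrations) grows with the number of sensor pairs that actually
# share detections, rather than with the full joint calibration state.
```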


Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies II | 2018

An architecture for sensor modular autonomy for counter-UAS

Paul A. Thomas; Gillian Fiona Marshall; David C. Lugton; David Andrew Alexander Faulkner; Scott F. Page; Simon Islip; Russell Brandon

This paper discusses a modular system architecture for detection, classification and localisation of Unmanned Aerial System (UAS) targets, consisting of intelligent Autonomous Sensor Modules (ASMs), a High-Level Decision Making Module (HLDMM), a middleware integration layer and an end-user GUI, under the previously reported SAPIENT framework. This enables plug-and-play sensor integration and autonomous fusion, including prediction of the vehicle trajectory for sensor cueing, multi-modal sensor fusion and target hand-off. The SAPIENT Counter-UAS (C-UAS) system was successfully demonstrated in a live trial against a range of UAS targets flown in a variety of attack trajectories, using radar and Electro-Optic (EO) C-UAS ASMs. The trial also demonstrated the use of synthetic sensors, on their own and in combination with real sensors. Outputs of all the available sensors were tracked and fused by the Cubica SAPIENT HLDMM, which then steered narrow field-of-view cameras onto the predicted 3D position of the UAS. The operator was provided with a map-based view showing alerts and tracks for situational awareness, together with snapshots from the EO sensor and video feeds from the steerable narrow field-of-view cameras. This demonstrates an effective C-UAS system operating entirely autonomously, with detection, localisation, classification, tracking, fusion and sensor management all performed in real time and with zero operator intervention, leading to "eyes on" the aerial threat.
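
The cueing step described above, steering narrow field-of-view cameras onto the predicted 3D position, can be illustrated with a constant-velocity extrapolation and a pan/tilt conversion. The motion model and the numbers are assumptions for illustration, not the SAPIENT implementation.

```python
# A minimal sketch of sensor cueing from a fused track: predict the UAS position
# a short time ahead with a constant-velocity model and convert it to pan/tilt
# angles for a steerable narrow field-of-view camera.
import numpy as np

def predict_position(track_positions, track_times, lead_time):
    """Constant-velocity extrapolation of a fused 3D track (N, 3) by lead_time seconds."""
    dt = track_times[-1] - track_times[-2]
    velocity = (track_positions[-1] - track_positions[-2]) / dt
    return track_positions[-1] + velocity * lead_time

def pan_tilt(target_xyz, camera_xyz):
    """Pan/tilt angles (radians) needed to point a camera at a predicted 3D position."""
    dx, dy, dz = target_xyz - camera_xyz
    pan = np.arctan2(dy, dx)
    tilt = np.arctan2(dz, np.hypot(dx, dy))
    return pan, tilt

# Example: cue the camera 0.5 s ahead of the latest fused track update.
positions = np.array([[100.0, 40.0, 30.0], [104.0, 42.0, 31.0]])
times = np.array([10.0, 10.5])
aim = predict_position(positions, times, lead_time=0.5)
print(pan_tilt(aim, camera_xyz=np.array([0.0, 0.0, 2.0])))
```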


Proceedings of SPIE | 2012

Spatio-temporal features for tracking and quadruped/biped discrimination

Rick Rickman; Keith Copsey; David C. Bamber; Scott F. Page

Techniques such as SIFT and SURF facilitate efficient and robust image processing operations through the use of sparse and compact spatial feature descriptors and show much potential for defence and security applications. This paper considers the extension of such techniques to include information from the temporal domain, to improve utility in applications involving moving imagery within video data. In particular, the paper demonstrates how spatio-temporal descriptors can be used very effectively as the basis of a target tracking system and as target discriminators which can distinguish between bipeds and quadrupeds. Results using sequences of video imagery of walking humans and dogs are presented, and the relative merits of the approach are discussed.
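
One plausible form of such a descriptor, sketched under assumptions rather than taken from the paper, concatenates a SIFT/SURF-style spatial gradient-orientation histogram with a histogram of temporal intensity change over a small x-y-t volume of video.

```python
# A minimal sketch of a spatio-temporal patch descriptor (assumed form): a spatial
# gradient-orientation histogram concatenated with a histogram of frame-to-frame
# intensity change, computed over a small space-time volume of the video.
import numpy as np

def st_descriptor(volume, n_spatial_bins=8, n_temporal_bins=4):
    """Describe a (T, H, W) grey-level space-time patch with intensities in [0, 1]."""
    gy, gx = np.gradient(volume.mean(axis=0))             # spatial gradients
    orientation = np.arctan2(gy, gx)
    magnitude = np.hypot(gx, gy)
    spatial_hist, _ = np.histogram(orientation, bins=n_spatial_bins,
                                   range=(-np.pi, np.pi), weights=magnitude)
    dt = np.diff(volume, axis=0)                          # temporal derivative
    temporal_hist, _ = np.histogram(dt, bins=n_temporal_bins,
                                    range=(-1.0, 1.0), weights=np.abs(dt))
    desc = np.concatenate([spatial_hist, temporal_hist])
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

# Descriptors of this kind can be matched between frames to track a target, or
# pooled over a track and classified (e.g. biped vs quadruped gait signatures).
```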


Proceedings of SPIE | 2010

Adaptive image kernels for maximising image quality

David C. Bamber; Scott F. Page; Matthew Bolsover; Duncan Hickman; Moira I. Smith; Paul K. Kimber

This paper discusses a novel image noise reduction strategy based on the use of adaptive image filter kernels. Three adaptive filtering techniques are discussed and a case study based on a novel Adaptive Gaussian Filter is presented. The proposed filter allows the noise content of the imagery to be reduced whilst preserving edge definition around important salient image features. Conventional adaptive filtering approaches are typically based on the adaptation of one or two basic filter kernel properties and use a single image content measure. In contrast, the technique presented in this paper is able to adapt multiple aspects of the kernel size and shape automatically according to multiple local image content measures which identify pertinent features across the scene. Example results which demonstrate the potential of the technique for improving image quality are presented. It is demonstrated that the proposed approach provides superior noise reduction capabilities over conventional filtering approaches on a local and global scale according to performance measures such as Root Mean Square Error, Mutual Information and Structural Similarity. The proposed technique has also been implemented on a Commercial Off-the-Shelf (COTS) Graphics Processing Unit (GPU) platform and demonstrates excellent performance in terms of image quality and speed, with real-time frame rates exceeding 100 Hz. A novel method employed to help leverage the gains of the processing architecture without compromising performance is also discussed.
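
A stripped-down sketch of the adaptive idea follows: it adapts only one kernel property (Gaussian width) driven by one content measure (gradient magnitude), whereas the filter in the paper adapts multiple aspects of the kernel from multiple measures. Parameter values are illustrative.

```python
# A minimal sketch of content-adaptive Gaussian smoothing: blur strength is
# reduced where a local content measure (here, gradient magnitude) indicates
# salient structure, so noise is smoothed in flat regions while edges keep
# their definition.
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_gaussian(image, sigma_flat=2.0, sigma_edge=0.5):
    """Blend strongly and weakly smoothed versions of `image` (float, [0, 1])
    according to the normalised local gradient magnitude."""
    gy, gx = np.gradient(image)
    edges = np.hypot(gx, gy)
    edges = edges / edges.max() if edges.max() > 0 else edges
    smooth_flat = gaussian_filter(image, sigma_flat)   # heavy smoothing for flat areas
    smooth_edge = gaussian_filter(image, sigma_edge)   # light smoothing near edges
    return (1.0 - edges) * smooth_flat + edges * smooth_edge
```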


Proceedings of SPIE | 2009

Adaptive processing for enhanced target acquisition

Scott F. Page; Moira I. Smith; Duncan Hickman; Mark Bernhardt; William J. Oxford; Norman Frederick Watson; F. Beath

Conventional air-to-ground target acquisition processes treat the image stream in isolation from external data sources. This ignores information that may be available through modern mission management systems and that could be fused into the detection process in order to provide enhanced performance. By way of an example relating to target detection, this paper explores the use of a priori knowledge and other sensor information in an adaptive architecture with the aim of enhancing performance in decision making. The approach taken here is to use knowledge of target size, terrain elevation, sensor geometry, solar geometry and atmospheric conditions to characterise the expected spatial and radiometric characteristics of a target in terms of probability density functions. An important consideration in the construction of the target probability density functions is the known error in the a priori knowledge. Potential targets are identified in the imagery and their spatial and expected radiometric characteristics are used to compute the target likelihood. The adaptive architecture is evaluated alongside a conventional non-adaptive algorithm using synthetic imagery representative of an air-to-ground target acquisition scenario. Lastly, future enhancements to the adaptive scheme are discussed, as well as strategies for managing poor-quality or absent a priori information.
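
The likelihood computation can be sketched as a product of per-characteristic PDFs, with the known errors in the a priori knowledge entering as the PDF widths. The Gaussian forms and the numbers below are assumptions for illustration.

```python
# A minimal sketch of scoring candidate detections against predicted target
# characteristics: a priori knowledge (target size, range, geometry, conditions)
# defines expected spatial and radiometric distributions, and each candidate is
# scored against them. Gaussian PDFs with widened standard deviations stand in
# for the known errors in the a priori data.
import numpy as np

def gaussian_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def target_likelihood(candidate, expected_extent_px, extent_err,
                      expected_radiance, radiance_err):
    """Score a candidate detection (dict with 'extent_px' and 'radiance'),
    assuming independent spatial and radiometric characteristics."""
    l_spatial = gaussian_pdf(candidate["extent_px"], expected_extent_px, extent_err)
    l_radiometric = gaussian_pdf(candidate["radiance"], expected_radiance, radiance_err)
    return l_spatial * l_radiometric

# Example: a candidate 12 px across scored against a target expected (from range,
# sensor geometry and known target size) to span 10 +/- 3 px.
candidate = {"extent_px": 12.0, "radiance": 0.6}
print(target_likelihood(candidate, expected_extent_px=10.0, extent_err=3.0,
                        expected_radiance=0.55, radiance_err=0.1))
```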


Proceedings of SPIE | 2010

Cooperative energy harvesting for long-endurance autonomous vehicle teams

Scott F. Page; J. D. Rogers; K. May; D. R. Myatt; Duncan Hickman; Moira I. Smith

Collaboration


Dive into Scott F. Page's collaborations.

Top Co-Authors

Murat Uney

University of Edinburgh

William J. Oxford

Defence Science and Technology Laboratory
