Publications


Featured research published by David J. Thornley.


The Computer Journal | 2010

Probabilistic Approaches to Estimating the Quality of Information in Military Sensor Networks

Duncan Fyfe Gillies; David J. Thornley; Chatschik Bisdikian

Modelling based on probabilistic inference can be used to estimate the quality of information delivered by a military sensor network. Different modelling tools have complementary characteristics that can be leveraged to create an accurate model open to intuitive and efficient querying. In particular, stochastic process models can be used to abstract away from the physical reality by describing it as components that exist in discrete states with probabilistically invoked actions that change the state. The quality of information may be assessed by using the model to compute the probability that reports made by the network to its users are correct. In contrast, dynamic Bayesian network models, which have been used in a variety of military applications, are a more suitable vehicle for understanding the overall network performance and making inferences about the quality of information. Queries can be made simply by instantiating some variables and computing the probability distributions over others. We show that it is possible to combine both modelling tools by constructing a Bayesian network over the state variables of the process algebra model. The sparsity of the resulting Bayesian network allows fast propagation of probabilities, and hence interactive querying for the quality of information.
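The core quality-of-information question the abstract describes, namely the probability that a report made by the network to its users is correct, reduces in the simplest binary case to a Bayes'-rule query. The sketch below illustrates that query only; the probabilities are made up for illustration and the paper's actual models are far richer (stochastic process algebra compiled into a sparse Bayesian network).

```python
# Minimal sketch: quality of information as the probability that a report
# is correct, computed by Bayes' rule over one binary state variable.
# All probabilities below are illustrative, not taken from the paper.

def report_correct_prob(p_event, p_detect, p_false_alarm):
    """P(event occurred | network reported an event)."""
    p_report = p_event * p_detect + (1 - p_event) * p_false_alarm
    return p_event * p_detect / p_report

# Example: rare event, good sensor, small false-alarm rate.
qoi = report_correct_prob(p_event=0.1, p_detect=0.9, p_false_alarm=0.05)
```

Even with a 90% detection rate, the rarity of the event pulls the posterior well below the detection rate, which is exactly why instantiating evidence variables and propagating probabilities matters.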


Intelligent Environments | 2010

Sensors as a Service Oriented Architecture: Middleware for Sensor Networks

John Ibbotson; Christopher Gibson; Joel Wright; Peter Waggett; Petros Zerfos; Boleslaw K. Szymanski; David J. Thornley

There is a significant challenge in designing, optimizing, deploying and managing complex sensor networks over heterogeneous communications infrastructures. The ITA Sensor Fabric addresses these challenges in the areas of sensor identification and discovery, sensor access and control, and sensor data consumability, by extending the message bus model commonly found in commercial IT infrastructures out to the edge of the network. In this paper we take the message bus model further into a semantically rich, model-based design and analysis approach that considers the sensor network and its contained services as a Service Oriented Architecture. We present an application of a hierarchic schema for nested service definitions together with an initial ontology that describes the assets and services deployed in a sensor network infrastructure.
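The integration pattern the Fabric extends to the network edge is the topic-based message bus. A minimal in-memory sketch of that pattern is below; the class and topic names are illustrative and are not the ITA Sensor Fabric's actual API.

```python
# Hypothetical sketch of a topic-based message bus, the commercial-IT
# integration pattern the ITA Sensor Fabric extends to the network edge.
# Class, topic, and field names are illustrative only.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

bus = MessageBus()
received = []
bus.subscribe("sensors/acoustic", received.append)
bus.publish("sensors/acoustic", {"sensor": "mic-7", "level_db": 62})
```

Decoupling producers from consumers this way is what makes sensor discovery and data consumability tractable over heterogeneous links.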


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Using stochastic process algebra models to estimate the quality of information in military sensor networks

David J. Thornley; Chatschik Bisdikian; Duncan Fyfe Gillies

In a typical military application, a wireless sensor network will operate in diffcult and dynamic conditions. Communication will be affected by local conditions, platform characteristics and power consumption constraints, and sensors may be lost during an engagement. It is clearly of great importance to decision makers to know what quality of information they can expect from a network in battlefield situations. We propose the development of a supporting technology founded in formal modeling, using stochastic process algebras for the development of quality of information measures. A simple example illustrates the central themes of outcome probability distribution prediction, and time-dependency analysis.
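Stochastic process algebra models of this kind are typically solved by compiling them to a continuous-time Markov chain and performing transient analysis. A minimal two-state example (sensor up/down with illustrative failure and repair rates, not taken from the paper) shows the time-dependency theme: the chain has a closed-form transient solution.

```python
import math

# Two-state CTMC sketch: a sensor is "up" or "down", failing at rate lam
# and being restored at rate mu (rates are illustrative). Starting up,
# the transient probability of being up at time t has the closed form:
#   P(up at t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu) t)

def p_up(t, lam, mu):
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

# Time-dependency: availability decays from 1.0 at t=0 toward the
# steady-state value mu/(lam+mu).
p0 = p_up(0.0, lam=0.2, mu=1.0)      # exactly 1.0
p_late = p_up(50.0, lam=0.2, mu=1.0) # close to 1.0/1.2
```

Larger process-algebra models have no closed form, but the same transient quantities are computed numerically over the generated chain.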


Proceedings of SPIE | 2009

Toward mission-specific service utility estimation using analytic stochastic process models

David J. Thornley; Robert James Young; James P. Richardson

Planning a mission to monitor, control or prevent activity requires postulation of subject behaviours, specification of goals, and the identification of suitable effects, candidate methods, information requirements, and effective infrastructure. In an operation that comprises many missions, it is desirable to base decisions to assign assets and computation time or communications bandwidth on the value of the result of doing so in a particular mission to the operation. We describe initial investigations of a holistic approach for judging the value of candidate sensing service designs by stochastic modeling of information delivery, knowledge building, synthesis of situational awareness, and the selection of actions and achievement of goals. Abstraction of physical and information transformations to interdependent stochastic state transition models enables calculation of probability distributions over uncertain futures using well-characterized approximations. This complements traditional Monte Carlo war gaming in which example futures are explored individually, by capturing probability distributions over loci of behaviours that show the importance and value of mission component designs. The overall model is driven by sensing processes that are constructed by abstracting from the physics of sensing to a stochastic model of the system's trajectories through sensing modes. This is formulated by analysing probabilistic projections of subject behaviours against functions which describe the quality of information delivered by the sensing service. This enables energy consumption predictions, and when composed into a mission model, supports calculation of situational awareness formulation and command satisfaction timing probabilities. These outcome probabilities then support calculation of relative utility and value.
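The contrast the abstract draws with Monte Carlo war gaming is between sampling individual futures and propagating the full probability distribution over outcomes. A toy absorbing Markov chain (states and transition probabilities invented for illustration) makes the distinction concrete:

```python
# Sketch: propagate a whole probability distribution over mission states
# instead of sampling futures one at a time. States and probabilities are
# illustrative. 0 = searching, 1 = success (absorbing), 2 = failure (absorbing).

P = [
    [0.90, 0.07, 0.03],  # from searching: keep searching / succeed / fail
    [0.00, 1.00, 0.00],
    [0.00, 0.00, 1.00],
]

def step(dist, P):
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]      # mission starts in "searching"
for _ in range(100):        # 100 time steps
    dist = step(dist, P)

# dist[1] and dist[2] approach 0.7 and 0.3 as the searching mass is absorbed,
# giving the outcome probabilities directly rather than by sampling.
```

One pass over the chain yields the same outcome probabilities a large Monte Carlo ensemble would only estimate.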


Proceedings of SPIE | 2009

A stochastic process algebraic abstraction of detection evidence fusion in tactical sensor networks

David J. Thornley; Duncan Fyfe Gillies; Chatschik Bisdikian

The output of a sensor network intended to detect events or objects generally comprises evidentiary reports of features in the environment that may correspond to those phenomena. Signals from multiple sensors are commonly fused to maximize fidelity of detection through for example synergy between different modes of detection, or simple confirmation. We have previously demonstrated the ability to calculate the meaning of a location report as a probability distribution over potential ground truths by using a stochastic process algebraic model compiled to a discrete-state, continuous-time Markov chain, and performing a transient analysis which resembles the process of parameterizing a Bayesian network. We introduce an approach to representing temporal fusion of multiple heterogeneous sensor detections with different modalities and timing characteristics using a stochastic process algebra. This facilitates analysis of probabilistic properties of the system, and inclusion of those properties into larger models. The formal models are translated into continuous time Markov chains, which provide an important trade-off between the approximation of timing information against complexity of analysis. This is vital to the investigation of analytic computation in real world problems. We illustrate this with an example detection-oriented sensing service model emphasizing the impact of timing. Detection probability and confidence is an essential aspect of the quality of information delivered by a sensing service. The present work is part of an effort to develop a formal event detection calculus that captures the essence of sensor information relating to events, such that features and dependencies can be exploited in re-usable, extendible compositional models.
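At the heart of the fusion step is combining evidence from heterogeneous detections into a posterior over ground truth. A minimal sketch, under the simplifying assumption that detections are conditionally independent given ground truth (a naive-Bayes approximation; all numbers are invented for illustration, and the paper's models additionally capture timing, which this sketch omits):

```python
# Sketch of evidence fusion across two sensors with different modalities,
# assuming conditionally independent detections given ground truth.
# All likelihoods and the prior are illustrative.

def fuse(prior, likelihoods_present, likelihoods_absent):
    """Posterior P(object present | all sensors reported)."""
    num = prior
    den = 1.0 - prior
    for lp, la in zip(likelihoods_present, likelihoods_absent):
        num *= lp   # P(report | present) per sensor
        den *= la   # P(report | absent) per sensor
    return num / (num + den)

# Acoustic sensor: P(report|present)=0.8, P(report|absent)=0.1.
# Seismic sensor:  P(report|present)=0.7, P(report|absent)=0.2.
p_present = fuse(0.5, [0.8, 0.7], [0.1, 0.2])
```

Two individually modest detections reinforce each other into high confidence, which is the synergy between modalities the abstract refers to.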


International Conference on Signal Processing | 2007

Novel Anisotropic Multidimensional Convolutional Filters for Derivative Estimation and Reconstruction

David J. Thornley

The Savitzky-Golay convolutional filter matches a polynomial to even-spaced, one dimensional data and uses this to measure smoothed derivatives. We re-examine the fundamental concept behind this filter, and generate a formulation approach with multidimensional, heterogeneous, anisotropic basis functions to provide a general smoothing, derivative measurement and reconstruction filter for arbitrary point clouds using a linear operator in the form of a convolution kernel. This novel approach yields filters for a wide range of applications such as robot vision, medical volumetric time series analysis and numerical differential equation solution on arbitrary meshes or point clouds without resampling. The urge to extend polynomial filters to higher dimensions is obvious yet previously unfulfilled. We provide a novel complete, arbitrary-dimensional approach to their construction, and introduce anisotropy and irregularity.
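The fundamental concept being re-examined is that a least-squares polynomial fit over an evenly spaced window reduces to a fixed convolution kernel. In the classic 1-D case, a quadratic fit over a 5-point window with unit spacing gives the first-derivative kernel (-2, -1, 0, 1, 2)/10:

```python
# Classic 1-D Savitzky-Golay first-derivative filter: fit a quadratic by
# least squares over a 5-point, unit-spaced window, then take the fitted
# polynomial's slope at the centre. For this window the least-squares slope
# collapses to the fixed kernel (-2, -1, 0, 1, 2) / 10.

KERNEL = [-2, -1, 0, 1, 2]

def sg_derivative(window, spacing=1.0):
    """Smoothed first derivative at the centre of a 5-sample window."""
    assert len(window) == 5
    return sum(k * y for k, y in zip(KERNEL, window)) / (10.0 * spacing)

# Exact for polynomials up to the fit order:
slope_linear = sg_derivative([-6.0, -3.0, 0.0, 3.0, 6.0])   # y = 3x -> 3.0
slope_quad = sg_derivative([4.0, 1.0, 0.0, 1.0, 4.0])       # y = x^2 -> 0.0
```

The paper's contribution is generalizing this construction to multidimensional, anisotropic, irregularly sampled data, where the kernel is derived per point cloud rather than tabulated.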


Quantitative Evaluation of Systems | 2006

Exploring correctness and accuracy of solutions to matrix polynomial equations in queues

David J. Thornley; Harf Zatschler

Spectral expansion and matrix analytic methods are important solution mechanisms for matrix polynomial equations. These equations are encountered in the steady-state analysis of Markov chains with a semi-finite or finite two dimensional lattice of states, which describe a significant class of finite and infinite queues. We prove that the limited size of the eigenspectrum of the matrix geometric representation used in matrix analytic solution mechanisms confines its applicability to systems with a number of eigenvalues less than or equal to the dimension of the matrices used to form the solution. As well as proving this limitation, we relate our experience of a practical queue with generalized exponential traffic whose steady state cannot be represented using one or two rate matrices. We also provide an explanation for the numerical issues creating difficulty in finding matrix geometric solutions for finite queues. While we have not found a solution to these numerical issues, we do outline the steps required to enable complete matrix geometric solutions with larger eigenspectra, but which may not be efficient. On the other hand, we identify a case where care must be taken when using spectral expansion. Essentially, the eigensystem of a finite queue degenerates at saturation. We therefore formulate an enhanced spectral expansion method using generalized eigenvectors, which we prove gives a complete solution, even at saturation. We conclude that the state of the art requires the use of efficient matrix analytic methods where applicable, but correct solution in the general case is currently only guaranteed using generalized spectral expansion. We suggest that use of matrix analytic tools directed toward efficiency for solving a given queueing system should be preceded by an analysis of the eigensystem of the solution through spectral expansion, whether algebraic or numerical, to verify that the solutions produced by such tools are correct.
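The matrix-analytic side of this comparison solves the matrix quadratic A0 + R A1 + R² A2 = 0 for the minimal rate matrix R by fixed-point iteration. In the scalar case of an M/M/1 queue the blocks collapse to numbers and R is simply the utilisation, which makes the iteration easy to check (block conventions vary between texts; this is only the scalar illustration, not the paper's harder cases):

```python
# Scalar sketch of the matrix-analytic fixed point
#     R_{k+1} = -(A0 + R_k^2 A2) * A1^{-1}
# for the QBD equation A0 + R A1 + R^2 A2 = 0. For M/M/1 the blocks are
# A0 = lam (arrival), A1 = -(lam + mu) (local), A2 = mu (service), and the
# minimal solution is the utilisation R = lam / mu.

def solve_R(lam, mu, iters=200):
    R = 0.0                               # iterate up toward the minimal root
    for _ in range(iters):
        R = (lam + R * R * mu) / (lam + mu)
    return R

R = solve_R(lam=0.3, mu=1.0)              # converges toward 0.3
```

The quadratic here has roots lam/mu and 1; the iteration from zero picks out the minimal one. The paper's point is that in the matrix case the spectrum of R can be too small to carry all the eigenvalues a complete solution needs, which is where generalized spectral expansion takes over.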


Computational Intelligence in Bioinformatics and Computational Biology | 2006

Machine Learning in Basecalling -- Decoding Trace Peak Behaviour

David J. Thornley; Stavros Petridis

DNA sequence basecalling is commonly regarded as a solved problem, despite significant error rates being reflected in inaccuracies in databases and genome annotations. These errors commonly arise from an inability to sequence through peak height variations in DNA sequencing traces from the Sanger sequencing method. Recent efforts toward improving basecalling accuracy have taken the form of more sophisticated digital filters and feature detectors. We demonstrate that the variation in peak heights itself encodes novel information which can be used for basecalling. To isolate this information for a clear demonstration, we perform a peculiar blind basecalling experiment using ABI processed output. Using classifiers responding to measurements in the context of the basecalling position, we call bases without reference to the peak heights at the basecalling position itself. Tree classifiers indicate which features are pertinent, and the application of neural nets to these features results in a startlingly high initial success rate of 78%. Our analysis indicates that we can make viable basecalls using information that has never been accessed before.
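The experimental design is the interesting part: call a base from *contextual* features (neighbouring peak heights) while withholding the peak at the call position itself. A toy sketch of that idea using a nearest-centroid rule; the centroids and feature values are entirely hypothetical, and the paper actually uses tree classifiers and neural nets:

```python
# Illustrative sketch of blind basecalling from contextual features only:
# the classifier never sees the peak height at the call position.
# The nearest-centroid rule and all numbers are hypothetical, not the
# paper's trained trees or neural nets.

def nearest_centroid(features, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))

# Hypothetical per-base centroids of (previous, next) neighbour peak heights.
centroids = {
    "A": (0.9, 0.4),
    "C": (0.3, 0.8),
}

call = nearest_centroid((0.85, 0.45), centroids)  # nearest to "A"
```

Any above-chance accuracy from such a classifier demonstrates that the context carries information of its own, which is the paper's central claim.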


Proceedings of SPIE | 2010

Warfighter decision making performance analysis as an investment priority driver

David J. Thornley; David Dean; James C. Kirk

Estimating the relative value of alternative tactics, techniques and procedures (TTP) and information systems requires measures of the costs and benefits of each, and methods for combining and comparing those measures. The NATO Code of Best Practice for Command and Control Assessment explains that decision making quality would ideally be best assessed on outcomes. Lessons learned in practice can be assessed statistically to support this, but experimentation with alternate measures in live conflict is undesirable. To this end, the development of practical experimentation to parameterize effective constructive simulation and analytic modelling for system utility prediction is desirable. The Land Battlespace Systems Department of Dstl has modeled human development of situational awareness to support constructive simulation by empirically discovering how evidence is weighed according to circumstance, personality, training and briefing. The human decision maker (DM) provides the backbone of the information processing activity associated with military engagements because of inherent uncertainty associated with combat operations. To develop methods for representing the process in order to assess equipment and non-technological interventions such as training and TTPs, we are developing componentized or modularized timed analytic stochastic model components and instruments as part of a framework to support quantitative assessment of intelligence production and consumption methods in a human decision maker-centric mission space. In this paper, we formulate an abstraction of the human intelligence fusion process from the Defence Science and Technology Laboratory's (Dstl's) INCIDER model to include in our framework, and synthesize relevant cost and benefit characteristics.


IEEE International Conference on Fuzzy Systems | 2007

Decoding Trace Peak Behaviour - A Neuro-Fuzzy Approach

David J. Thornley; Stavros Petridis

DNA sequence basecalling is commonly regarded as a solved problem, despite significant error rates being reflected in inaccuracies in databases and genome annotations. This has made measures of confidence of basecalls important, and fuzzy methods have recently been used to approximate confidence by responding to data quality at the calling position. We have demonstrated that variation in contextual sequencing trace data peak heights actively encodes novel information which can be used for basecalling and confidence estimation. Using neuro-fuzzy classifiers we are able to decode much of the hidden contextual information in two fuzzy rules per base and partially reveal its underlying behaviour. Those two fuzzy rules can satisfactorily explain over 74% of data samples. The error rate is 6-7% higher on individual bases than when using classification trees, but the number of rules is reduced by a factor of 100. Compact comprehensible knowledge representation is achieved with the use of SANFIS, which allows us to easily interpret the embedded knowledge. Finally, we propose a hybrid architecture based on SANFIS which achieves slightly better performance than a classification tree with significantly improved knowledge representation.
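The compactness claim, two interpretable fuzzy rules per base, rests on how fuzzy inference combines a handful of rules into one output. A minimal zero-order Takagi-Sugeno style sketch with two rules (the paper uses SANFIS; the membership shapes and consequent values here are illustrative only):

```python
# Minimal two-rule, zero-order Takagi-Sugeno fuzzy inference sketch.
# Membership functions and consequents are illustrative, not the paper's
# trained SANFIS rules.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(x):
    # Rule 1: IF peak height is LOW  THEN output 0.2
    # Rule 2: IF peak height is HIGH THEN output 0.9
    w1 = tri(x, 0.0, 0.2, 0.6)
    w2 = tri(x, 0.4, 0.8, 1.0)
    if w1 + w2 == 0.0:
        return 0.0
    return (w1 * 0.2 + w2 * 0.9) / (w1 + w2)  # firing-strength-weighted mean

y = infer(0.5)  # both rules fire partially; output blends 0.2 and 0.9
```

Because the whole model is two readable IF-THEN rules, the embedded knowledge can be inspected directly, which is the representational advantage over a hundred-rule classification tree.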

Collaboration


Top co-authors of David J. Thornley:

John Collinge, UCL Institute of Neurology
Robert I. Young, Defence Science and Technology Laboratory
Stavros Petridis, Carnegie Mellon University