Gordon Inggs
Imperial College London
Publication
Featured research published by Gordon Inggs.
Sensors | 2012
Antoine B. Bagula; Marco Zennaro; Gordon Inggs; Simon Scott; David Gascon
This paper presents a new Ubiquitous Sensor Network (USN) architecture for use in developing countries and demonstrates its usefulness by highlighting some of its key features. Complementing a previous ITU proposal, our architecture, referred to as “Ubiquitous Sensor Network for Development (USN4D)”, integrates into its layers features such as opportunistic data dissemination, long-distance deployment and localisation of information to meet the requirements of the developing world. Besides describing some of the most important requirements for the sensor equipment to be used in a USN4D setting, we present the main features of, and experiments conducted using, “WaspNet”, one of the wireless sensor deployment platforms that meets these requirements. Furthermore, building upon the “WaspNet” platform, we present an application to air pollution monitoring in the city of Cape Town, South Africa, as one of the first steps towards building community wireless sensor networks (CSN) in the developing world using off-the-shelf sensor equipment.
ieee radar conference | 2011
Michael Inggs; Gordon Inggs; Alan Langman; Simon Scott
Rhino is a hardware and software tool flow designed for software defined radio applications. We show that Rhino can be used for rapid prototyping of radar hardware systems with minimal adjustments to the core Rhino system; the new tool set is called RhinoRadar. The radar user is able to specify desired radar configurations (waveforms, repetition rates, sampling schemes, matched filtering) via simple GNURadio-like processing blocks, and is thus largely shielded from complex Hardware Description Language (HDL) coding. The Rhino hardware provides two FMC interfaces, giving access to a wide range of commercial A/D and D/A boards. It also supports the IEEE 1588 network timing standard, allowing multiple boards to be synchronised via an Ethernet network. Dual 10 gigabit network interfaces allow for real-time data streaming for recording or further signal processing.
international geoscience and remote sensing symposium | 2011
Michael Inggs; Gordon Inggs; Stephan Sandenbergh; Waddah A. Al-Ashwal; Karl Woodbridge; H.D. Griffiths
NetRad, consisting of three fully coherent S-Band (2400 MHz) nodes synchronised by clock and trigger cables, or by GPS Disciplined Oscillators (GPSDOs) with a programmable trigger, has been under development at UCL and UCT for a number of years. Basic output power is low (200 mW), but a high power transmitter (500 W peak) is available for one node. The cable links limit bistatic baselines to some 100 m, but the GPSDO option allows for full multistatic operation, including forward scatter. The nodes are tied together by means of a wireless network (5 GHz) and command and control software, which allows the radars to be triggered remotely via the GPSDOs, and the data capture parameters of the nodes to be altered at will. Usable results on small targets are possible out to 2 km, and further for large targets. We show measurements from trials on sea clutter and moving targets in the south of the UK and near Simon's Town, South Africa.
international conference on parallel processing | 2013
Gordon Inggs; David B. Thomas; Wayne Luk
This paper presents the Forward Financial Framework (F3), an application framework for describing and implementing forward-looking financial computations on high performance, heterogeneous platforms. F3 allows the computational finance problem specification to be captured precisely yet succinctly, then automatically creates efficient implementations for heterogeneous platforms, utilising both multi-core CPUs and FPGAs. The automatic mapping of a high-level problem description to a low-level heterogeneous implementation is possible due to the domain-specific knowledge built into F3, along with a software architecture that allows additional domain knowledge and rules to be added to the framework. Currently the system is able to use domain knowledge of the run-time characteristics of pricing tasks to partition pricing problems and allocate them to appropriate compute resources, and to exploit relationships between financial instruments to balance computation against communication. The versatility of the framework is demonstrated using a benchmark of option pricing problems, where F3 achieves comparable speed and energy efficiency to external manual implementations. Further, the domain-knowledge-guided partitioning scheme suggests a partitioning of subtasks that is 13% faster than the average, while exploiting domain dependencies to reduce redundant computations yields an average efficiency gain of 27%.
field programmable logic and applications | 2012
Gordon Inggs; David B. Thomas; Simon Winberg
This paper provides a novel way of trading increased resource utilisation for decreased latency when computing a single Discrete Fourier Transform on an FPGA. Analysis of the Cooley-Tukey FFT optimisation shows that it increases the number of operations in the critical path of the transform computation. Consequently, an algorithm is proposed which allows control over the degree to which the Cooley-Tukey optimisation is applied, trading between resource utilisation and absolute latency. Resource utilisation and latency results for a MyHDL implementation of the proposed algorithm upon the Rhino platform demonstrate that a practical Pareto curve has been established for a variety of dataset sizes. This implementation is also compared to Xilinx's FFT IP core, providing 14% better latency than the manufacturer's implementation, albeit at a greater resource cost.
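The trade-off at the heart of this abstract can be illustrated in software: the direct DFT computes each output as one independent sum (shallow dependency chain, many multipliers when unrolled in hardware), while each radix-2 Cooley-Tukey stage reuses intermediate results but adds a dependent butterfly stage to the critical path. The following Python sketch is purely illustrative, not the paper's MyHDL implementation; function names are made up here.

```python
import cmath

def dft_direct(x):
    # Direct O(N^2) DFT: each output X[k] is a single independent sum of
    # N products, so no output depends on another.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft_cooley_tukey(x):
    # Recursive radix-2 Cooley-Tukey FFT (N must be a power of two):
    # far fewer operations overall, but each recursion level adds a
    # butterfly stage that depends on the previous level's results.
    N = len(x)
    if N == 1:
        return list(x)
    even = fft_cooley_tukey(x[0::2])
    odd = fft_cooley_tukey(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + twiddled[k] for k in range(N // 2)] +
            [even[k] - twiddled[k] for k in range(N // 2)])
```

Both routines produce the same transform; the paper's contribution is choosing, per design, how many such recursive stages to apply in hardware.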
Archive | 2015
Gordon Inggs; Shane T. Fleming; David B. Thomas; Wayne Luk
High-Level Synthesis (HLS) tools for Field Programmable Gate Arrays (FPGAs) have made considerable progress in recent years, and are now ready for deployment in an industrial setting. This claim is supported by a case study of the pricing of a benchmark of Black-Scholes (BS) and Heston model-based options using a Monte Carlo simulation approach. Using an HLS tool such as Xilinx's Vivado HLS, Altera's OpenCL SDK or Maxeler's MaxCompiler, a functionally correct FPGA implementation can be developed in a short time from a high-level description based upon the MapReduce programming model. This direct source code implementation is, however, unlikely to meet performance expectations, and so a series of optimisations can be applied to use the target FPGA's resources more efficiently. When a combination of task and pipeline parallelism as well as C-slowing optimisations is applied to the problem in this case study, the Vivado HLS implementation is 9.5 times faster than a sequential CPU implementation, the Altera OpenCL implementation 221 times faster and the Maxeler implementation 204 times faster, the sort of acceleration expected of custom architectures. Compared to the 31 times improvement shown by an optimised multicore CPU implementation, the 60 times improvement by a GPU and the 207 times improvement by a Xeon Phi, these results suggest that HLS is indeed ready for business.
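The MapReduce structure the abstract refers to can be sketched for the simplest case, a Black-Scholes European call priced by Monte Carlo: the "map" step simulates independent terminal prices under geometric Brownian motion, and the "reduce" step averages the discounted payoffs. This is a minimal, sequential Python sketch with hypothetical parameter values, not any of the paper's HLS implementations.

```python
import math
import random

def price_european_call_mc(spot, strike, rate, vol, maturity, paths, seed=42):
    # Map: draw one standard normal per path and compute the terminal
    # price under geometric Brownian motion, then the call payoff.
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol * vol) * maturity
    diffusion = vol * math.sqrt(maturity)
    payoffs = (
        max(spot * math.exp(drift + diffusion * rng.gauss(0.0, 1.0)) - strike, 0.0)
        for _ in range(paths)
    )
    # Reduce: average the payoffs and discount back to today.
    return math.exp(-rate * maturity) * sum(payoffs) / paths
```

Because every path is independent, the map step is what task and pipeline parallelism (and C-slowing of the path loop) accelerate on an FPGA, while the reduce step is a single accumulation.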
Sensors | 2009
Antoine B. Bagula; Gordon Inggs; Simon Scott; Marco Zennaro
This paper revisits the problem of the readiness for field deployment of wireless sensor networks by assessing the relevance of using Open Hardware and Software motes for environment monitoring. We propose a new prototype wireless sensor network that fine-tunes SquidBee motes to improve the lifetime and sensing performance of an environment monitoring system that measures temperature, humidity and luminosity. Building upon two outdoor sensing scenarios, we evaluate the performance of the newly proposed energy-aware prototype solution in terms of link quality, as expressed by Received Signal Strength and Packet Loss, and of battery lifetime. The experimental results reveal the relevance of using Open Hardware and Software motes when setting up outdoor wireless sensor networks.
IEEE Transactions on Parallel and Distributed Systems | 2017
Gordon Inggs; David B. Thomas; Wayne Luk
Users of heterogeneous computing systems face two problems: first, understanding the trade-off relationships between the observable characteristics of their applications, such as latency and quality of the result, and second, exploiting knowledge of these characteristics to allocate work to distributed computing platforms efficiently. A domain-specific approach addresses both of these problems. By considering a subset of operations or functions, models of the observable characteristics, or domain metrics, may be formulated in advance and populated at run-time for task instances. These metric models can then be used to express the allocation of work as a constrained integer program. These claims are illustrated using the domain of derivatives pricing in computational finance, with the domain metrics of workload latency and pricing accuracy. For a large, varied workload of 128 Black-Scholes and Heston model-based option pricing tasks, running upon a diverse array of 16 multicore CPU, GPU and FPGA platforms, predictions made by models of both the makespan and accuracy are generally within 10 percent of the run-time performance. When these models are used as inputs to machine learning and MILP-based workload allocation approaches, latency improvements of up to 24 and 270 times respectively are seen over a heuristic approach.
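The allocation problem described above can be sketched on a toy scale: given per-platform latency predictions from the metric models, choose an assignment of tasks to platforms that minimises the makespan. The latency numbers below are hypothetical, and exhaustive search stands in for the MILP solver, which only works for such tiny instances; this is a sketch of the problem shape, not the paper's allocator.

```python
from itertools import product

# Hypothetical metric-model outputs: predicted latency (seconds) of each
# task on each platform. In the paper these come from benchmarked models.
latency = {
    ("task0", "cpu"): 4.0, ("task0", "gpu"): 1.0, ("task0", "fpga"): 2.0,
    ("task1", "cpu"): 3.0, ("task1", "gpu"): 1.5, ("task1", "fpga"): 0.5,
    ("task2", "cpu"): 2.0, ("task2", "gpu"): 2.5, ("task2", "fpga"): 1.0,
}
tasks = ["task0", "task1", "task2"]
platforms = ["cpu", "gpu", "fpga"]

def makespan(assignment):
    # The makespan is the total predicted latency on the busiest platform,
    # since platforms run their assigned tasks in sequence but in parallel
    # with each other.
    load = {p: 0.0 for p in platforms}
    for t, p in zip(tasks, assignment):
        load[p] += latency[(t, p)]
    return max(load.values())

# Brute-force minimisation over all assignments (3^3 = 27 candidates here);
# a MILP formulation replaces this search at realistic workload sizes.
best = min(product(platforms, repeat=len(tasks)), key=makespan)
```

At realistic sizes (128 tasks on 16 platforms) the search space is astronomically large, which is why the paper casts the problem as a constrained integer program for a solver.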
ieee international conference on high performance computing data and analytics | 2016
Andrea Picciau; Gordon Inggs; John Wickerson; Eric C. Kerrigan; George A. Constantinides
Many numerical optimisation problems rely on fast algorithms for solving sparse triangular systems of linear equations (STLs). To accelerate the solution of such equations, two types of approaches have been used: on GPUs, concurrency has been prioritised to the disadvantage of data locality, while on multi-core CPUs, data locality has been prioritised to the disadvantage of concurrency. In this paper, we discuss the interaction between data locality and concurrency in the solution of STLs on GPUs, and we present a new algorithm that balances both. We demonstrate empirically that, subject to there being enough concurrency available in the input matrix, our algorithm outperforms Nvidia's concurrency-prioritising CUSPARSE algorithm for GPUs. Experimental results show a maximum speedup of 5.8-fold. Our solution algorithm, which we have implemented in OpenCL, requires a pre-processing phase that partitions the graph associated with the input matrix into sub-graphs whose data can be stored in low-latency local memories. This preliminary analysis phase is expensive, but because it depends only on the input matrix, its cost can be amortised when solving for many different right-hand sides.
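The concurrency that the abstract refers to comes from the dependency structure of the triangular matrix: rows that depend only on already-solved rows can be solved simultaneously. Level scheduling is one standard way of exposing this, and the sketch below shows it in Python for a lower-triangular system stored as a dict of rows; it is an illustration of the concurrency structure, not the paper's locality-aware OpenCL partitioning algorithm.

```python
def level_schedule(L):
    # Group the rows of a lower-triangular matrix L (row index -> {col: value})
    # into levels: every row in a level depends only on rows in earlier
    # levels, so all rows within one level can be solved concurrently.
    n = len(L)
    level_of = [0] * n
    for i in range(n):
        deps = [level_of[j] + 1 for j in L[i] if j < i]
        level_of[i] = max(deps, default=0)
    levels = {}
    for i, lev in enumerate(level_of):
        levels.setdefault(lev, []).append(i)
    return [levels[k] for k in sorted(levels)]

def solve_lower(L, b):
    # Forward substitution processed level by level. On a GPU, the inner
    # loop over rows of one level is what runs in parallel; the outer loop
    # over levels is the serial critical path.
    x = [0.0] * len(b)
    for level in level_schedule(L):
        for i in level:
            s = sum(v * x[j] for j, v in L[i].items() if j != i)
            x[i] = (b[i] - s) / L[i][i]
    return x
```

A matrix with many small levels offers little concurrency, which matches the paper's caveat that the algorithm wins only when enough concurrency is available in the input matrix.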
ETHICS '14 Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology | 2014
Gordon Inggs
I argue that systems engineering is better served by greater demographic diversity. Demographic diversity comprises those characteristics recorded on official records such as census forms, while systems engineering is a holistic approach to engineering. Greater demographic diversity amongst a group of engineers leads to better systems engineering outcomes because it increases the probability of greater diversity of thought, which in turn improves systems engineering outcomes. There are, however, three substantial challenges to this argument: firstly, that the relationship between diversity of demography and diversity of thought might be exaggerated; secondly, that the lack of appreciation of diversity needs to be explained; and finally, that institutional culture could overwhelm personal characteristics.