
Publication


Featured research published by Paul Richmond.


TPCG | 2010

Agent-based Large Scale Simulation of Pedestrians With Adaptive Realistic Navigation Vector Fields

Twin Karmakharm; Paul Richmond; Daniela M. Romano

A large-scale pedestrian simulation method, implemented with an agent-based modelling paradigm, is presented in this paper. It allows rapid prototyping and real-time modifications, suitable for quickly generating and testing the viability of pedestrian movement in urban environments. The techniques described for pedestrian simulation make use of parallel processing on graphics card hardware, allowing simulation scales to far exceed those of serial frameworks for agent-based modelling. The simulation has been evaluated through performance benchmarks that vary the population size, navigation grid and number of averaged simulation steps. The results demonstrate that this is a robust and scalable method for implementing pedestrian navigation behaviour. Furthermore, an algorithm for generating smooth and realistic pedestrian navigation paths that works well in both small and large spaces is presented. An adaptive smoothing function is used to optimise the paths that pedestrian agents follow when navigating a complex dynamic environment. Vector maps obtained with and without this function are compared, and the results show that the optimised path generates a more realistic flow.
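The abstract does not give the adaptive smoothing function itself; as a rough illustration of the idea, one smoothing pass over a grid-based navigation vector field might look like the sketch below. The neighbourhood, the blend weight, and the obstacle-sensitive damping are assumptions for illustration, not the paper's actual algorithm.

```python
import math

def smooth_vector_field(field, obstacles, alpha=0.5):
    """One adaptive smoothing pass over a 2-D navigation vector field.

    field     -- dict mapping (x, y) cells to (vx, vy) unit vectors
                 pointing toward the goal
    obstacles -- set of (x, y) cells agents cannot enter
    alpha     -- maximum blend weight; damped near obstacles so paths
                 are not smoothed into walls (illustrative heuristic)
    """
    offsets = ((1, 0), (-1, 0), (0, 1), (0, -1))
    smoothed = {}
    for (x, y), (vx, vy) in field.items():
        # Gather the 4-neighbourhood vectors, skipping obstacle cells.
        neighbours = [field[(x + dx, y + dy)]
                      for dx, dy in offsets
                      if (x + dx, y + dy) in field
                      and (x + dx, y + dy) not in obstacles]
        if not neighbours:
            smoothed[(x, y)] = (vx, vy)
            continue
        # Adaptive weight: smooth less aggressively next to a wall.
        near_wall = any((x + dx, y + dy) in obstacles for dx, dy in offsets)
        w = alpha * (0.25 if near_wall else 1.0)
        ax = sum(v[0] for v in neighbours) / len(neighbours)
        ay = sum(v[1] for v in neighbours) / len(neighbours)
        nx, ny = (1 - w) * vx + w * ax, (1 - w) * vy + w * ay
        norm = math.hypot(nx, ny) or 1.0  # re-normalise to a unit vector
        smoothed[(x, y)] = (nx / norm, ny / norm)
    return smoothed
```

Applied iteratively, a pass like this rounds off the sharp turns of a raw goal-directed vector field while leaving vectors near obstacles largely intact.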


Neuroinformatics | 2014

From Model Specification to Simulation of Biologically Constrained Networks of Spiking Neurons

Paul Richmond; Alex Cope; Kevin N. Gurney; David J. Allerton

A declarative extensible markup language (SpineML) for describing the dynamics, network and experiments of large-scale spiking neural network simulations is described, building upon the NineML standard. It targets a level of abstraction suited to point neuron representation but addresses the limitations of existing tools by allowing arbitrary dynamics to be expressed. The use of XML promotes model sharing, is human readable and allows collaborative working. The syntax uses a high-level, self-explanatory format which allows straightforward code generation or translation of a model description to a native simulator format. This paper demonstrates the use of code generation to translate, simulate and reproduce the results of a benchmark model across a range of simulators. The flexibility of the SpineML syntax is highlighted by reproducing a pre-existing, biologically constrained model of a neural microcircuit (the striatum). The SpineML code is open source and is available at http://bimpa.group.shef.ac.uk/SpineML.
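The central mechanism here is code generation from a declarative XML description. As a toy illustration of that translation step, the sketch below turns a made-up component description into an executable forward-Euler update function; the element and attribute names are simplified placeholders, not the actual SpineML schema.

```python
import xml.etree.ElementTree as ET

# Illustrative only: these tag names are not the real SpineML schema.
COMPONENT_XML = """
<ComponentClass name="LeakyIntegrator">
  <Parameter name="tau" value="20.0"/>
  <TimeDerivative variable="V" expression="(-V + I) / tau"/>
</ComponentClass>
"""

def generate_update_function(xml_text):
    """Translate a declarative component description into a Python
    forward-Euler step function: a toy stand-in for translating a
    model description to a native simulator format."""
    root = ET.fromstring(xml_text)
    params = {p.get("name"): float(p.get("value"))
              for p in root.findall("Parameter")}
    deriv = root.find("TimeDerivative")
    var, expr = deriv.get("variable"), deriv.get("expression")
    # Emit simulator source from the declarative description.
    src = (f"def step({var}, I, dt):\n"
           f"    return {var} + dt * ({expr})\n")
    namespace = dict(params)  # parameters become globals of the function
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["step"], src

step, source = generate_update_function(COMPONENT_XML)
```

A real tool would emit C/C++ or a simulator's native script rather than Python, but the pattern is the same: parse the declarative model, then render executable code from it.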


GPU Computing Gems Emerald Edition | 2011

Template-Driven Agent-Based Modeling and Simulation with CUDA

Paul Richmond; Daniela M. Romano

This chapter describes a number of key techniques used to implement a flexible agent-based modeling (ABM) framework entirely on the GPU in CUDA. Performance rates better than those of high-performance computing (HPC) clusters can easily be achieved. Agent-based modeling is a technique for the computational simulation of complex interacting systems through the specification of the behavior of a number of autonomous individuals acting simultaneously. The focus on individuals is considerably more computationally demanding than top-down system-level simulation, but provides a natural and flexible environment for studying systems demonstrating emergent behavior. Massive population sizes can be simulated, far exceeding those that can be computed (within reasonable time constraints) by traditional ABM toolkits. The use of data-parallel methods ensures that the techniques within this chapter are applicable to emerging multicore and data-parallel architectures that will continue to increase their level of parallelism to improve performance. The concept of a flexible architecture is built around the use of a neutral modeling language (XML) for agents. The technique of template-driven dynamic code generation, specifically using XML template processing, is also general enough to be effective in other domains seeking to abstract modeling logic from simulation code for portability.
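To make the phrase "template-driven dynamic code generation" concrete, the sketch below fills a CUDA-flavoured source template from a neutral XML agent description. The XML tag names and the template are invented for illustration; they are not the chapter's actual schema or templates.

```python
from string import Template
import xml.etree.ElementTree as ET

# Hypothetical agent description; tag names are illustrative only.
AGENT_XML = ('<agent name="Pedestrian">'
             '<var type="float" name="x"/>'
             '<var type="float" name="y"/>'
             '</agent>')

# A CUDA-flavoured template: the per-agent struct and kernel signature
# are generated from the neutral XML, keeping model data layout out of
# hand-written simulation code.
KERNEL_TEMPLATE = Template(
    "struct ${name}_agent {\n${members}};\n\n"
    "__global__ void ${name}_step(${name}_agent *agents, int n) {\n"
    "    int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
    "    if (i >= n) return;\n"
    "    /* per-agent behaviour inserted here */\n"
    "}\n")

def render_agent_kernel(xml_text):
    """Render CUDA source text for one agent type from its XML spec."""
    root = ET.fromstring(xml_text)
    members = "".join(f"    {v.get('type')} {v.get('name')};\n"
                      for v in root.findall("var"))
    return KERNEL_TEMPLATE.substitute(name=root.get("name"),
                                      members=members)

cuda_source = render_agent_kernel(AGENT_XML)
```

The generated source would then be compiled by the framework's build step; the point of the pattern is that changing the XML changes the generated code without touching the simulation core.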


PLOS ONE | 2011

Democratic Population Decisions Result in Robust Policy-Gradient Learning: A Parametric Study with GPU Simulations

Paul Richmond; Lars Buesing; Michele Giugliano; Eleni Vasilaki

High-performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at low cost. However, GPU programming is a non-trivial task, and architectural limitations raise the question of whether investing effort in this direction is worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which this architecture and learning rule perform best. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a “non-democratic” mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons “vote” independently (“democratically”) for a decision via a population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult to carry out on a desktop computer without GPU programming. We present the routines developed for this purpose and show a speed improvement of 5× to 42× over optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search for learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated.
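The "democratic" readout the abstract refers to is the standard population vector: every neuron votes along its preferred direction with a weight given by its firing rate. A minimal sketch of that readout, with illustrative values rather than the paper's actual network parameters:

```python
import math

def population_vector_readout(rates, preferred_angles):
    """'Democratic' decision: each neuron votes with its firing rate
    along its preferred direction; the decision is the angle of the
    resulting population vector."""
    x = sum(r * math.cos(a) for r, a in zip(rates, preferred_angles))
    y = sum(r * math.sin(a) for r, a in zip(rates, preferred_angles))
    return math.atan2(y, x)

# 8 neurons with evenly spaced preferred directions (illustrative).
angles = [2 * math.pi * k / 8 for k in range(8)]
# Activity peaked around the neuron preferring pi/2.
rates = [math.exp(-((a - math.pi / 2) ** 2)) for a in angles]
decision = population_vector_readout(rates, angles)  # close to pi/2
```

Because every neuron contributes independently, noise in individual rates tends to average out, which is one intuition for why the paper finds this readout learns more robustly than a winner-take-all activity bump.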


Archive | 2016

Large-Scale Simulations with FLAME

Simon Coakley; Paul Richmond; Marian Gheorghe; Shawn Chin; Dj Worth; Mike Holcombe; Chris Greenough

This chapter presents the latest stage of FLAME development: the high-performance environment FLAME-II and FLAMEGPU, a parallel architecture designed for Graphics Processing Units. The architecture and performance of these two agent-based software environments are presented, together with illustrative large-scale simulations of systems from biology, economics, psychology and crowd behaviour applications.


Simulation | 2017

PI-FLAME

Shailesh Tamrakar; Paul Richmond; Roshan M. D'Souza

Agent-based models (ABMs) are increasingly being used to study population dynamics in complex systems, such as the human immune system. Previously, Folcik et al. (The basic immune simulator: an agent-based model to study the interactions between innate and adaptive immunity. Theor Biol Med Model 2007; 4: 39) developed a Basic Immune Simulator (BIS) and implemented it using the Recursive Porous Agent Simulation Toolkit (RePast) ABM simulation framework. However, frameworks such as RePast are designed to execute serially on central processing units and therefore cannot efficiently handle large model sizes. In this paper, we report on our implementation of the BIS using FLAME GPU, a parallel computing ABM simulator designed to execute on graphics processing units. To benchmark our implementation, we simulate the response of the immune system to a viral infection of generic tissue cells. We compared our results with those obtained from the original RePast implementation for statistical accuracy. We observe that our implementation has a 13× performance advantage over the original RePast implementation.


Neuroinformatics | 2017

SpineCreator: a Graphical User Interface for the Creation of Layered Neural Models

Alex Cope; Paul Richmond; Sebastian S. James; Kevin N. Gurney; David J. Allerton

There is a growing requirement in computational neuroscience for tools that permit collaborative model building, model sharing and the combination of existing models into a larger system (multi-scale model integration), and that are able to simulate models using a variety of simulation engines and hardware platforms. Layered XML model specification formats solve many of these problems; however, they are difficult to write and visualise without tools. Here we describe a new graphical software tool, SpineCreator, which facilitates the creation and visualisation of layered models of point spiking neurons or rate-coded neurons without the need for programming. We demonstrate the tool through the reproduction and visualisation of published models and show simulation results using code generation interfaced directly into SpineCreator. As a unique application for the graphical creation of neural networks, SpineCreator represents an important step forward for neuronal modelling.


Archive | 2008

Automatic Generation of Residential Areas using Geo-Demographics

Paul Richmond; Daniela M. Romano

The neighbourhood aspect of city models is often overlooked in methods of generating detailed city models. This paper identifies two distinct styles of virtual city generation and highlights the weaknesses and strengths of both, before proposing a geo-demographically based solution to automatically generate 3D residential neighbourhood models suitable for use within simulative training. The algorithm's main body of work focuses on a classification-based system which applies a texture library of captured building instances to extruded and optimised virtual buildings created from 2D GIS data.


bioRxiv | 2016

The Fruit Fly Brain Observatory: from structure to function

Nikul H. Ukani; Chung-Heng Yeh; Adam Tomkins; Yiyin Zhou; Dorian Florescu; Carlos Luna Ortiz; Yu-Chi Huang; Cheng-Te Wang; Paul Richmond; Chung-Chuan Lo; Daniel Coca; Ann-Shyn Chiang; Aurel A. Lazar

The Fruit Fly Brain Observatory (FFBO) is a collaborative effort between experimentalists, theorists and computational neuroscientists at Columbia University, National Tsing Hua University and Sheffield University with the goal to (i) create an open platform for the emulation and biological validation of fruit fly brain models in health and disease, (ii) standardize tools and methods for graphical rendering, representation and manipulation of brain circuits, (iii) standardize tools for representation of fruit fly brain data and its abstractions and support for natural language queries, (iv) create a focus for the neuroscience community with interests in the fruit fly brain and encourage the sharing of fruit fly brain structural data and executable code worldwide. NeuroNLP and NeuroGFX, two key FFBO applications, aim to address two major challenges, respectively: i) seamlessly integrate structural and genetic data from multiple sources that can be intuitively queried, effectively visualized and extensively manipulated, ii) devise executable brain circuit models anchored in structural data for understanding and developing novel hypotheses about brain function. NeuroNLP enables researchers to use plain English (or other languages) to probe biological data that are integrated into a novel database system, called NeuroArch, that we developed for integrating biological and abstract data models of the fruit fly brain. With powerful 3D graphical visualization, NeuroNLP presents a highly accessible portal for the fruit fly brain data. NeuroGFX provides users highly intuitive tools to execute neural circuit models with Neurokernel, an open-source platform for emulating the fruit fly brain, with full data support from the NeuroArch database and visualization support from an interactive graphical interface. Brain circuits can be configured with high flexibility and investigated on multiple levels, e.g., whole brain, neuropil, and local circuit levels. 
The FFBO is publicly available at http://fruitflybrain.org and accessible from any modern web browser, including those running on smartphones.


Bone | 2016

Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods

H.R. Evans; Twin Karmakharm; Michelle A. Lawson; Rebecca E. Walker; W. Harris; C. Fellows; I.D. Huggins; Paul Richmond; Andrew D. Chantry

Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations: they do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy-to-use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability, we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (± 19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (± 0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method.
However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images), it is a more rapid method and it has less user variability.

Collaboration


Dive into Paul Richmond's collaboration.

Top Co-Authors

Adam Tomkins

University of Sheffield


Alex Cope

University of Sheffield
