Publication


Featured research published by Cameron Patterson.


IEEE Journal of Solid-State Circuits | 2013

SpiNNaker: A 1-W 18-Core System-on-Chip for Massively-Parallel Neural Network Simulation

Eustace Painkras; Luis A. Plana; Jim D. Garside; Steve Temple; Francesco Galluppi; Cameron Patterson; David R. Lester; Andrew D. Brown; Steve B. Furber

The modelling of large systems of spiking neurons is computationally very demanding in terms of processing power and communication. SpiNNaker - Spiking Neural Network architecture - is a massively parallel computer system designed to provide a cost-effective and flexible simulator for neuroscience experiments. It can model up to a billion neurons and a trillion synapses in biological real time. The basic building block is the SpiNNaker Chip Multiprocessor (CMP), which is a custom-designed globally asynchronous locally synchronous (GALS) system with 18 ARM968 processor nodes residing in synchronous islands, surrounded by a lightweight, packet-switched asynchronous communications infrastructure. In this paper, we review the design requirements for its very demanding target application, the SpiNNaker micro-architecture, and its implementation issues. We also evaluate the SpiNNaker CMP, which contains 100 million transistors in a 102-mm² die, provides a peak performance of 3.96 GIPS, and has a peak power consumption of 1 W when all processor cores operate at the nominal frequency of 180 MHz. SpiNNaker chips are fully operational and meet their power and performance requirements.
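As a back-of-envelope check (arithmetic derived from the figures quoted above, not taken from the paper), the headline numbers imply roughly 220 MIPS per core and about 3.96 GIPS per watt at peak:

```python
# Back-of-envelope check of the stated SpiNNaker CMP figures.
peak_gips = 3.96      # peak performance, giga-instructions per second
cores = 18            # ARM968 processor nodes per chip
freq_mhz = 180.0      # nominal core frequency
power_w = 1.0         # peak power consumption

mips_per_core = peak_gips * 1e3 / cores        # ~220 MIPS per core
instr_per_cycle = mips_per_core / freq_mhz     # ~1.22 instructions per cycle
gips_per_watt = peak_gips / power_w            # ~3.96 GIPS per watt

print(f"{mips_per_core:.0f} MIPS/core, {instr_per_cycle:.2f} IPC, "
      f"{gips_per_watt:.2f} GIPS/W")
```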


International Symposium on Neural Networks | 2013

Power analysis of large-scale, real-time neural networks on SpiNNaker

Evangelos Stromatias; Francesco Galluppi; Cameron Patterson; Stephen B. Furber

Simulating large spiking neural networks is non-trivial: supercomputers offer great flexibility at the price of power and communication overheads; custom neuromorphic circuits are more power efficient but less flexible; while alternative approaches based on GPGPUs and FPGAs, whilst being more readily available, show similar model specialization. As well as efficiency and flexibility, real-time simulation is a desirable neural network characteristic, for example in cognitive robotics where embodied agents interact with the environment using low-power, event-based neuromorphic sensors. The SpiNNaker neuromimetic architecture has been designed to address these requirements, simulating large-scale heterogeneous models of spiking neurons in real-time, offering a unique combination of flexibility, scalability and power efficiency. In this work, a 48-chip board is utilised to generate a SpiNNaker power estimation model, based on the numbers of neurons, synapses and their firing rates. In addition, we demonstrate simulations capable of handling up to a quarter of a million neurons, 81 million synapses and 1.8 billion synaptic events per second, with the most complex simulations consuming less than 1 W per SpiNNaker chip.
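A power model of the kind described, parameterized by neuron count, synapse count and firing rate, can be sketched as a simple linear function; the coefficient names and values below are illustrative assumptions, not the fitted parameters from the paper:

```python
def estimate_power_w(n_neurons, n_synapses, mean_rate_hz,
                     p_idle=0.3, e_neuron=1e-8, e_synaptic_event=1e-9):
    """Illustrative linear power model for one SpiNNaker chip.

    p_idle           -- baseline (idle) chip power in watts (assumed)
    e_neuron         -- power per neuron state-update stream in watts (assumed)
    e_synaptic_event -- energy per synaptic event in joules (assumed)
    """
    synaptic_events_per_s = n_synapses * mean_rate_hz
    return (p_idle
            + e_neuron * n_neurons
            + e_synaptic_event * synaptic_events_per_s)
```

With parameters of this shape, measured power at a few operating points is enough to fit the coefficients by linear regression.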


Custom Integrated Circuits Conference | 2012

SpiNNaker: A multi-core System-on-Chip for massively-parallel neural net simulation

Eustace Painkras; Luis A. Plana; Jim D. Garside; Steve Temple; Simon Davidson; Jeffrey Pepper; David M. Clark; Cameron Patterson; Steve B. Furber

The modelling of large systems of spiking neurons is computationally very demanding in terms of processing power and communication. SpiNNaker is a massively-parallel computer system designed to model up to a billion spiking neurons in real time. The basic block of the machine is the SpiNNaker multicore System-on-Chip, a Globally Asynchronous Locally Synchronous (GALS) system with 18 ARM968 processor nodes residing in synchronous islands, surrounded by a lightweight, packet-switched asynchronous communications infrastructure. The MPSoC contains 100 million transistors in a 102 mm² die, provides a peak performance of 3.96 GIPS and has a power consumption of 1 W at 1.2 V when all processor cores operate at nominal frequency. SpiNNaker chips were delivered in May 2011, were fully operational, and met power and performance requirements.


Neural Networks | 2011

2011 Special Issue: Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware

Alexander D. Rast; Francesco Galluppi; Sergio Davies; Luis A. Plana; Cameron Patterson; Thomas Sharp; David R. Lester; Steve B. Furber

Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience.
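For context, the LIF (leaky integrate-and-fire) dynamics mentioned above reduce to a one-line membrane update per timestep; a minimal discrete-time sketch with illustrative parameter values, not the on-chip implementation:

```python
def lif_step(v, i_in, dt=1.0, tau_m=20.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-65.0, r_m=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    v        -- membrane potential (mV); i_in -- input current
    Returns (new membrane potential, spiked?). Parameters illustrative.
    """
    dv = (-(v - v_rest) + r_m * i_in) * dt / tau_m
    v = v + dv
    if v >= v_thresh:          # threshold crossing: emit spike, reset
        return v_reset, True
    return v, False
```

With constant suprathreshold input the neuron charges toward threshold and fires periodically, which is the behaviour a network of such cells contributes at scale.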


International Symposium on Neural Networks | 2010

Algorithm and software for simulation of spiking neural networks on the multi-chip SpiNNaker system

Xin Jin; Francesco Galluppi; Cameron Patterson; Alexander D. Rast; Sergio Davies; Steve Temple; Steve B. Furber

This paper presents the algorithm and software developed for parallel simulation of spiking neural networks on multiple SpiNNaker universal neuromorphic chips. It not only describes approaches to simulating neural network models, such as dynamics, neural representations, and synaptic delays, but also presents the software design for loading a neural application and initiating a simulation on the multi-chip SpiNNaker system. A series of sub-issues is also investigated, such as neuron-processor allocation, synapse distribution, and route planning. The platform is verified by running spiking neural applications on both the SoC Designer model and the physical SpiNNaker Test Chip. This work summarizes the problems we have solved and highlights those requiring further investigation; it therefore forms the foundation of the software design on SpiNNaker, guiding future development towards a universal platform for real-time simulation of extremely large-scale neural systems.
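Neuron-processor allocation, one of the sub-issues mentioned above, can be illustrated with a simple contiguous-block partitioning scheme; the cap of 1000 neurons per core and the function shape are assumptions for illustration, not the paper's algorithm:

```python
def allocate_neurons(n_neurons, n_cores, max_per_core=1000):
    """Partition neuron IDs into contiguous blocks, one block per core.

    Returns a list of (core_index, range_of_neuron_ids) pairs; raises
    ValueError if the machine is too small. max_per_core models a
    per-core memory/compute budget (illustrative value).
    """
    per_core = min(max_per_core, -(-n_neurons // n_cores))  # ceil division
    if per_core * n_cores < n_neurons:
        raise ValueError("not enough cores for this population")
    allocation, start, core = [], 0, 0
    while start < n_neurons:
        end = min(start + per_core, n_neurons)
        allocation.append((core, range(start, end)))
        core += 1
        start = end
    return allocation
```

Contiguous blocks keep a neuron's ID easy to map back to a (chip, core, offset) address, which simplifies routing-table construction.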


Journal of Parallel and Distributed Computing | 2012

Scalable communications for a million-core neural processing architecture

Cameron Patterson; Jim D. Garside; Eustace Painkras; Steve Temple; Luis A. Plana; Javier Navaridas; Thomas Sharp; Steve B. Furber

The design of a new high-performance computing platform to model biological neural networks requires scalable, layered communications in both hardware and software. SpiNNaker's hardware is based upon Multi-Processor System-on-Chips (MPSoCs) with flexible, power-efficient, custom communication between processors and chips. The architecture scales from a single 18-processor chip to over 1 million processors and to simulations of billion-neuron, trillion-synapse models, with tens of trillions of neural spike-event packets conveyed each second. The communication networks and overlying protocols are key to the successful operation of the SpiNNaker architecture, designed together to maximise performance and minimise the power demands of the platform. SpiNNaker is a work in progress, having recently reached a major milestone with the delivery of the first MPSoCs. This paper presents the architectural justification, which is now supported by preliminary measured results of silicon performance, indicating that it is indeed scalable to a million-plus processor system.
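SpiNNaker's multicast routers forward a spike packet by matching its routing key against ternary (key, mask) table entries to select output links; a minimal software model of that lookup (the table contents and the treatment of unmatched packets are illustrative simplifications):

```python
def route_packet(routing_key, table):
    """Return the output-link set for the first matching table entry.

    table is a list of (key, mask, links) tuples; an entry matches when
    the packet key agrees with the entry key on every bit set in mask.
    Unmatched packets return None here (real routers default-route them).
    """
    for key, mask, links in table:
        if routing_key & mask == key & mask:
            return links
    return None

# Illustrative table: keys 0x0100xxxx fan out to links 0 and 3,
# keys 0x0200xxxx go to link 1 only.
table = [
    (0x01000000, 0xFFFF0000, {0, 3}),
    (0x02000000, 0xFFFF0000, {1}),
]
```

Because one table entry covers a whole population of source neurons, the router duplicates packets in hardware and the table stays small relative to the neuron count.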


International Journal of Parallel Programming | 2012

Managing burstiness and scalability in event-driven models on the SpiNNaker neuromimetic system

Alexander D. Rast; Javier Navaridas; Xin Jin; Francesco Galluppi; Luis A. Plana; José Miguel-Alonso; Cameron Patterson; Mikel Luján; Steve B. Furber

Neural networks present a fundamentally different model of computation from the conventional sequential digital model, for which conventional hardware is typically poorly matched. However, a combination of model and scalability limitations has meant that neither dedicated neural chips nor FPGAs have offered an entirely satisfactory solution. SpiNNaker introduces a different approach, the “neuromimetic” architecture, that maintains the neural optimisation of dedicated chips while offering FPGA-like universal configurability. This parallel multiprocessor employs an asynchronous event-driven model that uses interrupt-generating dedicated hardware on the chip to support real-time neural simulation. Nonetheless, event handling, particularly packet servicing, requires careful and innovative design in order to avoid local processor congestion and possible deadlock. We explore the impact that spatial locality, temporal causality and burstiness of traffic have on network performance, using tunable, biologically similar synthetic traffic patterns. Having established the viability of the system for real-time operation, we use two exemplar neural models to illustrate how to implement efficient event-handling service routines that mitigate the problem of burstiness in the traffic. Extending work published in ACM Computing Frontiers 2010 with on-chip testing, the simulation results indicate the viability of SpiNNaker for large-scale neural modelling, while emphasizing the need for effective burst management and network mapping. Ultimately, the goal is the creation of a library-based development system that can translate a high-level neural model from any description environment into an efficient SpiNNaker instantiation. The complete system represents a general-purpose platform that can generate an arbitrary neural network and run it with hardware speed and scale.
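A common pattern for taming bursty packet arrival, in the spirit of the service routines discussed above though not the paper's actual code, is to keep the interrupt path to a bare enqueue and service packets from the main loop:

```python
from collections import deque

class PacketServicer:
    """Deferred-service model: the 'interrupt' path only enqueues, and
    the main loop drains the queue, so a burst costs queue space rather
    than time inside the handler (illustrative, capacity assumed)."""

    def __init__(self, capacity=256):
        self.queue = deque()
        self.capacity = capacity
        self.dropped = 0

    def on_packet(self, packet):
        """Fast path, conceptually runs at interrupt time."""
        if len(self.queue) >= self.capacity:
            self.dropped += 1          # loss accounting under overload
        else:
            self.queue.append(packet)

    def drain(self, handler):
        """Slow path, runs in the main loop; returns packets handled."""
        handled = 0
        while self.queue:
            handler(self.queue.popleft())
            handled += 1
        return handled
```

Bounding the queue makes overload explicit (counted drops) instead of letting a burst stall the handler and back up the on-chip network.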


Parallel Computing | 2013

SpiNNaker: Fault tolerance in a power- and area-constrained large-scale neuromimetic architecture

Javier Navaridas; Steve B. Furber; Jim D. Garside; Xin Jin; Mukaram Khan; David R. Lester; Mikel Luján; José Miguel-Alonso; Eustace Painkras; Cameron Patterson; Luis A. Plana; Alexander D. Rast; Dominic Richards; Yebin Shi; Steve Temple; Jian Wu; Shufan Yang

SpiNNaker is a biologically-inspired massively-parallel computer designed to model up to a billion spiking neurons in real-time. A full-fledged implementation of a SpiNNaker system will comprise more than 10⁵ integrated circuits (half of which are SDRAMs and half multi-core systems-on-chip). Given this scale, it is unavoidable that some components fail and, in consequence, fault tolerance is a foundation of the system design. Although the target application can tolerate a certain, low level of failures, considerable effort has been devoted to incorporating different techniques for fault tolerance. This paper discusses how hardware and software mechanisms collaborate to make SpiNNaker operate properly even in the very likely scenario of component failures, and how it can tolerate system-degradation levels well above those expected.
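To see why fault tolerance must be foundational at this scale, a back-of-envelope estimate helps; the per-component failure probability below is an assumed illustrative figure, not measured data:

```python
# With ~10^5 components, even a tiny per-component failure rate makes
# at least one failure a near-certainty over the machine's lifetime.
n_components = 10 ** 5     # order of the full-scale component count
p_fail = 1e-3              # assumed annual per-component failure probability

expected_failures = n_components * p_fail    # expected failures per year
p_all_ok = (1 - p_fail) ** n_components      # chance of a failure-free year

print(f"expected failures/year: {expected_failures:.0f}")
print(f"probability of a failure-free year: {p_all_ok:.1e}")
```

Even with optimistic per-component reliability, a failure-free year is astronomically unlikely, so the system must degrade gracefully rather than depend on every part working.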


International Symposium on Neural Networks | 2011

Distributed configuration of massively-parallel simulation on SpiNNaker neuromorphic hardware

Thomas Sharp; Cameron Patterson; Steve B. Furber

SpiNNaker is a massively-parallel neuromorphic computing architecture designed to model very large, biologically plausible spiking neural networks in real-time. A SpiNNaker machine consists of up to 2¹⁶ (65,536) homogeneous eighteen-core multiprocessor chips, each with an on-board router which forms links with neighbouring chips for packet-switched interprocessor communications. The architecture is designed for dynamic reconfiguration and optimised for transmission of neural activity data, which presents a challenge for machine configuration, program loading and simulation monitoring given a lack of globally-shared memory resources, intrinsic addressing mode or sideband configuration channel. We propose distributed software mechanisms to address these problems and present experiments which demonstrate the necessity of this approach in contrast to centralised mechanisms.
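Without shared memory or a sideband channel, program loading can propagate neighbour-to-neighbour across the chip network, flood-fill style; a simplified graph model of such distribution (topology and function are illustrative, not the paper's protocol):

```python
from collections import deque

def flood_fill_hops(neighbours, root):
    """Model of neighbour-to-neighbour program distribution.

    neighbours maps a chip id to the ids of directly linked chips.
    Returns {chip: hop count at which it receives the image}.
    """
    loaded = {root: 0}
    frontier = deque([root])
    while frontier:
        chip = frontier.popleft()
        for nxt in neighbours[chip]:
            if nxt not in loaded:
                loaded[nxt] = loaded[chip] + 1
                frontier.append(nxt)
    return loaded

# A 2x2 arrangement of chips with links 0-1, 0-2, 1-3, 2-3.
grid = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
```

The hop count bounds loading latency: it grows with network diameter rather than with the total chip count, which is what makes the distributed approach scale.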


Frontiers in Neural Circuits | 2014

Engineering a thalamo-cortico-thalamic circuit on SpiNNaker: a preliminary study toward modeling sleep and wakefulness

Basabdatta Sen Bhattacharya; Cameron Patterson; Francesco Galluppi; Simon J. Durrant; Steve B. Furber

We present a preliminary study of a thalamo-cortico-thalamic (TCT) implementation on SpiNNaker (Spiking Neural Network architecture), a brain-inspired hardware platform designed to incorporate the inherent biological properties of parallelism, fault tolerance and energy efficiency. These attributes make SpiNNaker an ideal platform for simulating biologically plausible computational models. Our focus in this work is to design a TCT framework that can be simulated on SpiNNaker to mimic dynamical behavior similar to Electroencephalogram (EEG) time and power-spectra signatures in sleep-wake transition. The scale of the model is minimized for simplicity in this proof-of-concept study; thus the total number of spiking neurons is ≈1000 and represents a “mini-column” of the thalamocortical tissue. All data on model structure, synaptic layout and parameters are drawn from previous studies and abstracted at a level that is appropriate to the aims of the current study as well as computationally suitable for model simulation on a small 4-chip SpiNNaker system. The initial results from selective deletion of synaptic connectivity parameters in the model show similarity with EEG power spectra characteristics of sleep and wakefulness. These observations provide a positive perspective and a basis for future implementation of a very large scale biologically plausible model of thalamo-cortico-thalamic interactivity: the essential brain circuit that regulates the biological sleep-wake cycle and associated EEG rhythms.

Collaboration


Dive into Cameron Patterson's collaboration.

Top Co-Authors
Luis A. Plana, University of Manchester

Steve Temple, University of Manchester

Xin Jin, University of Manchester

Jim D. Garside, University of Manchester