
Publication


Featured research published by Alexander D. Rast.


International Symposium on Neural Networks | 2008

SpiNNaker: Mapping neural networks onto a massively-parallel chip multiprocessor

Muhammad Mukaram Khan; David R. Lester; Luis A. Plana; Alexander D. Rast; Xin Jin; Eustace Painkras; Stephen B. Furber

SpiNNaker is a novel chip, based on the ARM processor, designed to support large-scale spiking neural network simulations. In this paper we describe some of the features that permit SpiNNaker chips to be connected together to form scalable massively-parallel systems. Our eventual goal is to be able to simulate neural networks consisting of 10^9 neurons running in 'real time', by which we mean that a similarly sized collection of biological neurons would run at the same speed. We then describe the methods by which neural networks are mapped onto the system, and how features designed into the chip are to be exploited in practice. We also describe the modelling and verification activities by which we hope to ensure that, when the chip is delivered, it will work as anticipated.


International Symposium on Neural Networks | 2010

Implementing spike-timing-dependent plasticity on SpiNNaker neuromorphic hardware

Xin Jin; Alexander D. Rast; Francesco Galluppi; Sergio Davies; Steve B. Furber

This paper presents an efficient approach for implementing spike-timing-dependent plasticity (STDP) on the SpiNNaker neuromorphic hardware. The event-address mapping and the distributed synaptic weight storage schemes used in parallel neuromorphic hardware such as SpiNNaker make the conventional pre-post-sensitive scheme of STDP implementation inefficient, since STDP is triggered when either a pre- or post-synaptic neuron fires. An alternative pre-sensitive scheme is presented to solve this problem, in which STDP is triggered only when a pre-synaptic neuron fires. An associated deferred event-driven model is developed to enable the pre-sensitive scheme by deferring the STDP process until sufficient spike-timing history records are available. The paper gives a detailed description of the implementation as well as a performance estimate of STDP on a multi-chip SpiNNaker machine, along with a discussion of issues related to efficient STDP implementation on parallel neuromorphic hardware.
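The pre-sensitive scheme can be pictured with a short sketch. The Python below is a minimal illustration, assuming a standard exponential STDP window; the constants, names and data structures are ours, not the paper's implementation. Weight updates run only when a deferred pre-synaptic spike is finally processed, once the post-synaptic spike history covers the full window around it.

```python
import math
from collections import deque

# Illustrative constants; not the paper's values.
TAU_PLUS = 20.0    # potentiation time constant (ms)
TAU_MINUS = 20.0   # depression time constant (ms)
A_PLUS = 0.01      # potentiation learning rate
A_MINUS = 0.012    # depression learning rate
WINDOW = 50.0      # STDP window; pre-spikes are deferred this long (ms)

def stdp_update(w, t_pre, post_spikes):
    """Apply STDP for one deferred pre-synaptic spike at time t_pre."""
    for t_post in post_spikes:
        dt = t_post - t_pre
        if 0 < dt <= WINDOW:        # pre before post: potentiate
            w += A_PLUS * math.exp(-dt / TAU_PLUS)
        elif -WINDOW <= dt < 0:     # post before pre: depress
            w -= A_MINUS * math.exp(dt / TAU_MINUS)
    return w

class DeferredStdp:
    """Pre-sensitive STDP: updates happen only on (deferred) pre-spikes."""

    def __init__(self):
        self.pending_pre = deque()   # pre-spike times awaiting history
        self.post_history = deque()  # recent post-spike timing records

    def on_pre_spike(self, t):
        self.pending_pre.append(t)   # defer; do not process yet

    def on_post_spike(self, t):
        self.post_history.append(t)  # only record the timing

    def advance(self, now, w):
        # Process pre-spikes whose full +/- WINDOW history is now known.
        while self.pending_pre and now - self.pending_pre[0] >= WINDOW:
            t_pre = self.pending_pre.popleft()
            w = stdp_update(w, t_pre, list(self.post_history))
        # Drop post-spikes too old to matter for any pending pre-spike.
        horizon = (self.pending_pre[0] if self.pending_pre else now) - WINDOW
        while self.post_history and self.post_history[0] < horizon:
            self.post_history.popleft()
        return w
```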


International Symposium on Neural Networks | 2008

Virtual synaptic interconnect using an asynchronous network-on-chip

Alexander D. Rast; Shufan Yang; Muhammad Mukaram Khan; Stephen B. Furber

Given the limited current understanding of the neural model of computation, hardware neural network architectures that impose a specific relationship between physical connectivity and model topology are likely to be overly restrictive. Here we introduce, in the SpiNNaker chip, an alternative approach: a mappable virtual topology using an asynchronous network-on-chip (NoC) that decouples the “logical” connectivity map from the physical wiring. Borrowing the established digital RAM model for synapses, we develop a concurrent memory access channel optimised for neural processing that allows each processing node to perform its own synaptic updates as if the synapses were local to the node. The highly concurrent nature of interconnect access, however, requires careful design of intermediate buffering and arbitration. We show here how a locally buffered, one-transaction-per-node model with multiple synapse updates per transaction enables the local node to offload continuous burst traffic from the NoC, allowing for a hardware-efficient design that supports biologically realistic speeds. The design not only presents a flexible model for neural connectivity but also suggests an ideal form for general-purpose high-performance on-chip interconnect.
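As an illustration of the one-transaction-per-node idea, here is a small Python sketch, with an invented in-memory "synapse RAM" standing in for the shared memory: a node responds to a spike event by fetching the entire synaptic row for the source neuron in a single burst, then applying every update locally without further interconnect traffic.

```python
# Sketch only: the dict-of-lists memory and field names are illustrative.
SYNAPSE_RAM = {
    # source neuron id -> row of (local target index, weight, delay)
    0: [(0, 0.5, 1), (3, 0.2, 2)],
    1: [(2, 0.8, 1)],
}

class Node:
    def __init__(self, n_local_neurons):
        self.input_current = [0.0] * n_local_neurons

    def on_spike_event(self, source_id):
        # One burst transaction: pull the whole row into a local buffer...
        row = SYNAPSE_RAM.get(source_id, [])
        # ...then perform all synaptic updates locally, off the NoC.
        for target, weight, delay in row:
            self.input_current[target] += weight  # delay handling omitted

node = Node(n_local_neurons=4)
node.on_spike_event(0)
print(node.input_current)   # [0.5, 0.0, 0.0, 0.2]
```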


Computing Frontiers | 2012

A hierarchical configuration system for a massively parallel neural hardware platform

Francesco Galluppi; Sergio Davies; Alexander D. Rast; Thomas Sharp; Luis A. Plana; Steve B. Furber

Simulation of large networks of neurons is a powerful and increasingly prominent methodology for investigating brain function and structure. Dedicated parallel hardware is a natural candidate for simulating the dynamic activity of many non-linear units communicating asynchronously. It is only scientifically useful, however, if the simulation tools can be configured and run easily and quickly. We present a method to map network models to computational nodes on the SpiNNaker system, a programmable parallel neurally-inspired hardware architecture, by exploiting the hierarchies built into the model. This PArtitioning and Configuration MANager (PACMAN) system supports arbitrary network topologies and arbitrary membrane potential and synapse dynamics, and (most importantly) decouples the model from the device, allowing a variety of languages (PyNN, Nengo, etc.) to drive the simulation hardware. Model representation operates at the Population/Projection level rather than at the single-neuron and connection level, exploiting hierarchical properties to lower the complexity of allocating resources and mapping the model onto the system. PACMAN can thus be used to generate structures coming from different models and front-ends, either with a host-based process or by parallelising it on the SpiNNaker machine itself to speed up the generation process greatly. We describe the approach with a first implementation of the framework used to configure the current generation of SpiNNaker machines and present results from a set of key benchmarks. The system allows researchers to exploit dedicated simulation hardware which may otherwise be difficult to program. In effect, PACMAN provides automated hardware acceleration for some commonly used network simulators while also pointing towards the advantages of hierarchical configuration for large, domain-specific hardware systems.
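A toy sketch of the hierarchical idea, in Python: partitioning operates on whole Populations and Projections, never on individual neurons or connections. The per-core capacity and data layout below are assumptions for illustration, not PACMAN's actual algorithm.

```python
MAX_NEURONS_PER_CORE = 256   # illustrative per-core capacity

def partition_population(pop_size):
    """Split a population into contiguous core-sized slices."""
    slices, start = [], 0
    while start < pop_size:
        end = min(start + MAX_NEURONS_PER_CORE, pop_size)
        slices.append((start, end))   # [start, end) neuron indices
        start = end
    return slices

def partition_projection(pre_slices, post_slices):
    """A projection between populations becomes one sub-projection per
    pair of (pre, post) fragments, so mapping stays population-level."""
    return [(pre, post) for pre in pre_slices for post in post_slices]

pre = partition_population(1000)    # 4 slices of at most 256 neurons
post = partition_population(500)    # 2 slices
print(len(partition_projection(pre, post)))   # 8 sub-projections
```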


Journal of Neuroscience Methods | 2012

Power-efficient simulation of detailed cortical microcircuits on SpiNNaker

Thomas Sharp; Francesco Galluppi; Alexander D. Rast; Steve B. Furber

Computer simulation of neural matter is a promising methodology for understanding the function of the brain. Recent anatomical studies have mapped the intricate structure of cortex, and these data have been exploited in numerous simulations attempting to explain its function. However, the largest of these models run inconveniently slowly and require vast amounts of electrical power, which hinders useful experimentation. SpiNNaker is a novel computer architecture designed to address these problems using low-power microprocessors and custom communication hardware. We use four SpiNNaker chips (of a planned fifty thousand) to simulate, in real time, a cortical circuit of ten thousand spiking neurons and four million synapses. In this simulation, the hardware consumes 100 nJ per neuron per millisecond and 43 nJ per postsynaptic potential, the smallest figures reported for any digital computer. We argue that this approaches the goal of fast, power-efficient and scientifically useful simulation of large cortical areas.
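The reported figures allow a quick back-of-envelope power estimate for this simulation. Assuming a mean firing rate of 10 Hz (the rate is our assumption; the paper's figures are per-event):

```python
# Back-of-envelope check on the reported energy figures.
neurons = 10_000
synapses = 4_000_000
e_neuron_ms = 100e-9     # 100 nJ per neuron per millisecond
e_psp = 43e-9            # 43 nJ per postsynaptic potential
rate_hz = 10.0           # assumed mean firing rate

p_neurons = neurons * e_neuron_ms * 1000        # J/s -> 1.0 W
p_synapses = synapses * rate_hz * e_psp         # J/s -> ~1.7 W
print(f"{p_neurons:.1f} W + {p_synapses:.1f} W = "
      f"{p_neurons + p_synapses:.1f} W total")  # ~2.7 W
```

At that firing rate the synaptic events already dominate, which is why the per-PSP energy is the figure that matters most at scale.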


International Symposium on Neural Networks | 2010

The Leaky Integrate-and-Fire neuron: A platform for synaptic model exploration on the SpiNNaker chip

Alexander D. Rast; Francesco Galluppi; Xin Jin; Steve B. Furber

Large-scale neural hardware systems are trending increasingly towards the “neuromimetic” architecture: a general-purpose platform that specialises the hardware for neural networks but allows flexibility in model choice. Since the model is not hard-wired into the chip, exploration of different neural and synaptic models is not merely possible but provides a rich field for research: the possibility to use the hardware to establish useful abstractions of biological neural dynamics that could lead to a functional model of neural computation. Two areas of neural modelling stand out as central: 1) What level of detail in the neurodynamic model is necessary to achieve biologically realistic behaviour? 2) What are the role and effect of different types of synapses in the computation? Using a universal event-driven neural chip, SpiNNaker, we develop a simple model, the Leaky Integrate-and-Fire (LIF) neuron, as a tool for exploring the second of these questions, complementary to the existing Izhikevich model, which allows exploration of the first. The LIF model permits the development of multiple synaptic models, including fast AMPA/GABA-A synapses with or without STDP learning and slow NMDA synapses, spanning a range of different dynamic time constants. Its simple dynamics make it possible to expand the complexity of the synaptic response, while the general-purpose design of SpiNNaker makes it possible, if necessary, to increase the neurodynamic accuracy with Izhikevich (or even Hodgkin-Huxley) neurons, with some tradeoff in model size. Furthermore, the LIF model is a universally accepted “standard” neural model that provides a good basis for comparison with software simulations and introduces minimal risk of obscuring important synaptic effects through unusual neurodynamics. The simple models run thus far demonstrate the viability of both the LIF model and various possible synaptic models on SpiNNaker, and illustrate how it can be used as a platform for model exploration. Such an architecture provides a scalable system for high-performance large-scale neural modelling with complete freedom in model choice.
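For concreteness, a minimal LIF neuron with one exponential synaptic current, Euler-integrated at a 1 ms timestep as on SpiNNaker, might look like the following Python sketch; all constants are illustrative rather than taken from the paper.

```python
# dv/dt = (V_rest - v + R*I) / tau_m ;  dI/dt = -I / tau_syn
DT = 1.0            # timestep (ms)
TAU_M = 20.0        # membrane time constant (ms)
TAU_SYN = 5.0       # synaptic time constant (ms), AMPA-like
V_REST = -65.0      # resting potential (mV)
V_THRESH = -50.0    # firing threshold (mV)
V_RESET = -65.0     # post-spike reset (mV)
R_M = 1.0           # membrane resistance (arbitrary units)

v, i_syn = V_REST, 0.0
spikes = []
for t in range(200):
    if t == 10:
        i_syn += 100.0              # one strong input spike at t = 10 ms
    v += DT * (V_REST - v + R_M * i_syn) / TAU_M
    i_syn -= DT * i_syn / TAU_SYN
    if v >= V_THRESH:
        spikes.append(t)
        v = V_RESET
print(spikes)   # the input is strong enough to elicit a single spike
```

Swapping in a second, slower current (an NMDA-like time constant) is a one-line change, which is the sense in which the LIF neuron serves as a platform for synaptic model exploration.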


Neural Networks | 2011

2011 Special Issue: Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware

Alexander D. Rast; Francesco Galluppi; Sergio Davies; Luis A. Plana; Cameron Patterson; Thomas Sharp; David R. Lester; Steve B. Furber

Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience.


International Conference on Neural Information Processing | 2010

A general-purpose model translation system for a universal neural chip

Francesco Galluppi; Alexander D. Rast; Sergio Davies; Steve B. Furber

This paper describes how an emerging standard neural network modelling language can be used to configure a general-purpose neural multi-chip system, by describing the process of writing and loading neural network models on the SpiNNaker neuromimetic hardware. It focuses on the implementation of a SpiNNaker module for PyNN, a simulator-independent language for neural network modelling. We successfully extend PyNN to deal with different non-standard (e.g. Izhikevich) cell types, rapidly switch between them, and load applications onto the parallel hardware by orchestrating the software layers below it, so that they are abstracted away from the end user. Finally, we run some simulations in PyNN and compare them against other simulators, successfully reproducing single-neuron and network dynamics and validating the implementation.
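The kind of portability described here is easiest to see in a script. The sketch below uses modern PyNN (0.8+) syntax, which differs from the 2010-era API in the paper, and the SpiNNaker back-end module name is an assumption: the point is that the same model runs on a software simulator or on the hardware by changing one import. It also mixes a standard LIF population with a non-standard Izhikevich one, as described above and in the heterogeneous-simulation paper.

```python
# Simulator-independent model: swap the back-end by changing the import,
# e.g. "import pyNN.spiNNaker as sim" (module name varies by release).
import pyNN.neuron as sim

sim.setup(timestep=1.0)

# Heterogeneous cell types in one network: standard LIF and Izhikevich.
lif = sim.Population(100, sim.IF_curr_exp(tau_m=20.0))
izh = sim.Population(100, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0))

sim.Projection(lif, izh, sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

lif.record("spikes")
sim.run(1000.0)                 # simulate one second
data = lif.get_data()           # Neo block of recorded spikes
sim.end()
```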


International Symposium on Neural Networks | 2010

Algorithm and software for simulation of spiking neural networks on the multi-chip SpiNNaker system

Xin Jin; Francesco Galluppi; Cameron Patterson; Alexander D. Rast; Sergio Davies; Steve Temple; Steve B. Furber

This paper presents the algorithm and software developed for parallel simulation of spiking neural networks on multiple SpiNNaker universal neuromorphic chips. It not only describes approaches to simulating neural network models, covering dynamics, neural representations, and synaptic delays, but also presents the software design for loading a neural application and initialising a simulation on the multi-chip SpiNNaker system. A series of sub-issues is also investigated, such as neuron-to-processor allocation, synapse distribution, and route planning. The platform is verified by running spiking neural applications on both the SoC Designer model and the physical SpiNNaker Test Chip. This work summarises the problems we have solved and highlights those requiring further investigation; it therefore forms the foundation of the software design on SpiNNaker, guiding future development towards a universal platform for real-time simulation of extremely large-scale neural systems.
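Route planning on SpiNNaker is key-based multicast: a spike packet carries only its source neuron's key, and each router matches that key against a table of (key, mask) entries to decide which links carry the packet onward. The following Python sketch illustrates the mechanism; the 32-bit field layout and widths are assumptions for illustration, not the chip's actual format.

```python
def make_key(chip, core, neuron):
    """Pack a source address into a routing key (field widths assumed)."""
    return (chip << 16) | (core << 11) | neuron

class Router:
    def __init__(self):
        self.table = []          # entries of (key, mask, output links)

    def add_entry(self, key, mask, links):
        self.table.append((key, mask, links))

    def route(self, packet_key):
        # Ternary match: compare only the bits selected by the mask.
        for key, mask, links in self.table:
            if packet_key & mask == key:
                return links
        return []                # no match: drop (or default-route)

r = Router()
# All spikes from chip 2, core 3 (any neuron) go out on links 0 and 4.
r.add_entry(make_key(2, 3, 0), 0xFFFFF800, [0, 4])
print(r.route(make_key(2, 3, 42)))   # [0, 4]
```

Because one entry covers every neuron on a core, table size tracks the number of populations routed through a node rather than the number of neurons, which is what makes the approach scale.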


International Conference on Neural Information Processing | 2009

Implementing Learning on the SpiNNaker Universal Neural Chip Multiprocessor

Xin Jin; Alexander D. Rast; Francesco Galluppi; Muhammad Mukaram Khan; Steve B. Furber

Large-scale neural simulation requires high-performance hardware with on-chip learning. Using SpiNNaker, a universal neural network chip multiprocessor, we demonstrate an STDP implementation as an example of programmable on-chip learning for dedicated neural hardware. Using a scheme driven entirely by pre-synaptic spike events, we optimize both the data representation and processing for efficiency of implementation. The deferred-event model provides a reconfigurable timing record length to meet different accuracy requirements. Results demonstrate successful STDP within a multi-chip simulation containing 60 neurons and 240 synapses. This optimisable learning model illustrates the scalable general-purpose techniques essential for developing functional learning rules on general-purpose, parallel neural hardware.

Collaboration


Alexander D. Rast's top co-authors, all at the University of Manchester:

Luis A. Plana

Xin Jin

Sergio Davies

Mikel Luján