
Publication


Featured research published by Sergio Davies.


International Symposium on Neural Networks | 2010

Implementing spike-timing-dependent plasticity on SpiNNaker neuromorphic hardware

Xin Jin; Alexander D. Rast; Francesco Galluppi; Sergio Davies; Steve B. Furber

This paper presents an efficient approach for implementing spike-timing-dependent plasticity (STDP) on the SpiNNaker neuromorphic hardware. The event-address mapping and the distributed synaptic weight storage schemes used in parallel neuromorphic hardware such as SpiNNaker make the conventional pre-post-sensitive scheme of STDP implementation inefficient, since STDP is triggered when either a pre- or post-synaptic neuron fires. An alternative pre-sensitive scheme is presented to solve this problem, in which STDP is triggered only when a pre-synaptic neuron fires. An associated deferred event-driven model is developed to enable the pre-sensitive scheme by deferring the STDP process until sufficient spike-timing history records are available. The paper gives a detailed description of the implementation as well as a performance estimation of STDP on a multi-chip SpiNNaker machine, along with a discussion of issues related to efficient STDP implementation on parallel neuromorphic hardware.
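The pre-sensitive, deferred scheme can be sketched as follows. The deferral window, the STDP constants, and the function names are assumptions for illustration, not the SpiNNaker implementation:

```python
import math

# Hypothetical sketch of the pre-sensitive, deferred STDP scheme: weight
# updates run only when a pre-synaptic spike is processed, and processing
# is deferred until the post-synaptic spike history around the pre-spike
# time can no longer change.
A_PLUS, A_MINUS = 0.1, 0.12   # LTP / LTD amplitudes (illustrative)
TAU = 20.0                    # STDP time constant (ms)

def deferred_stdp(t_pre, post_spikes, t_now, weight, defer=64.0):
    """Apply STDP for a pre-synaptic spike at t_pre, but only once the
    clock t_now has advanced past t_pre + defer, so that the recorded
    post-synaptic history around t_pre is complete."""
    if t_now < t_pre + defer:
        return weight, False              # not enough history yet: defer
    for t_post in post_spikes:
        dt = t_post - t_pre
        if dt > 0:                        # post after pre -> potentiation
            weight += A_PLUS * math.exp(-dt / TAU)
        elif dt < 0:                      # post before pre -> depression
            weight -= A_MINUS * math.exp(dt / TAU)
    return weight, True
```

Deferring until the history is stable is what lets a purely pre-triggered rule still account for post-synaptic spikes that arrive shortly after the pre-synaptic one.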


Computing Frontiers | 2012

A hierarchical configuration system for a massively parallel neural hardware platform

Francesco Galluppi; Sergio Davies; Alexander D. Rast; Thomas Sharp; Luis A. Plana; Steve B. Furber

Simulation of large networks of neurons is a powerful and increasingly prominent methodology for investigating brain functions and structures. Dedicated parallel hardware is a natural candidate for simulating the dynamic activity of many non-linear units communicating asynchronously. It is only scientifically useful, however, if the simulation tools can be configured and run easily and quickly. We present a method to map network models to computational nodes on the SpiNNaker system, a programmable parallel neurally-inspired hardware architecture, by exploiting the hierarchies built into the model. This PArtitioning and Configuration MANager (PACMAN) system supports arbitrary network topologies and arbitrary membrane potential and synapse dynamics, and (most importantly) decouples the model from the device, allowing a variety of languages (PyNN, Nengo, etc.) to drive the simulation hardware. Model representation operates on a Population/Projection level rather than at the single-neuron and connection level, exploiting hierarchical properties to lower the complexity of allocating resources and mapping the model onto the system. PACMAN can thus be used to generate structures coming from different models and front-ends, either with a host-based process or by parallelising it on the SpiNNaker machine itself to greatly speed up the generation process. We describe the approach with a first implementation of the framework used to configure the current generation of SpiNNaker machines and present results from a set of key benchmarks. The system allows researchers to exploit dedicated simulation hardware which may otherwise be difficult to program. In effect, PACMAN provides automated hardware acceleration for some commonly used network simulators while also pointing towards the advantages of hierarchical configuration for large, domain-specific hardware systems.
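The Population-level partitioning idea can be sketched as follows; the per-core neuron limit and all names are illustrative assumptions, not PACMAN internals:

```python
# A minimal sketch of population-level partitioning in the spirit of
# PACMAN: populations are split into core-sized slices before mapping,
# so resource allocation works on groups of neurons rather than on
# single neurons and connections.
MAX_NEURONS_PER_CORE = 256   # illustrative limit, not the real figure

def partition(populations, max_per_core=MAX_NEURONS_PER_CORE):
    """Split each (name, size) population into slices that fit on one
    core. Returns (name, start, stop) slices in model order, which a
    later placement stage would assign to physical cores."""
    slices = []
    for name, size in populations:
        for start in range(0, size, max_per_core):
            stop = min(start + max_per_core, size)
            slices.append((name, start, stop))
    return slices
```

Because the split happens at the Population level, the number of objects the mapper must place grows with the number of populations, not with the number of neurons.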


International Symposium on Neural Networks | 2012

Real time on-chip implementation of dynamical systems with spiking neurons

Francesco Galluppi; Sergio Davies; Steve B. Furber; Terry C. Stewart; Chris Eliasmith

Simulation of large-scale networks of spiking neurons has become appealing for understanding the computational principles of the nervous system by producing models based on biological evidence. In particular, networks that can assume a variety of (dynamically) stable states have been proposed as the basis for different behavioural and cognitive functions. This work focuses on implementing the Neural Engineering Framework (NEF), a formal method for mapping attractor networks and control-theoretic algorithms to biologically plausible networks of spiking neurons, on the SpiNNaker system, a massively parallel programmable architecture oriented towards the simulation of networks of spiking neurons. We describe how to encode and decode analog values to and from patterns of neural spikes directly on chip. These methods take advantage of the full programmability of the ARM968 cores constituting the processing base of a SpiNNaker node, and exploit the fast Network-on-Chip for spike communication. In this paper we focus on the fundamentals of representing, transforming and implementing dynamics in spiking networks. We show real-time simulation results demonstrating the NEF principles and discuss advantages, precision and scalability. More generally, the present approach can be used to state and test hypotheses with large-scale spiking neural network models for a range of different cognitive functions and behaviours.
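The NEF representation principle can be sketched with a toy rate-based version: an analog value is encoded into non-negative activities via encoders, gains and biases, and recovered as a weighted sum through linear decoders. The rectified-linear "neurons" and all parameter values here are illustrative simplifications of the on-chip spiking implementation:

```python
# Toy NEF-style representation of a scalar with two opposed neurons.
def encode(x, encoders=(1.0, -1.0), gain=100.0, bias=0.0):
    """Map the represented value x to firing rates (Hz), one per neuron."""
    return [max(0.0, gain * e * x + bias) for e in encoders]

def decode(rates, decoders=(0.01, -0.01)):
    """Estimate the represented value as a decoder-weighted sum of rates."""
    return sum(d * r for d, r in zip(decoders, rates))
```

With these matched parameters, decode(encode(x)) recovers x exactly for |x| <= 1; a spiking implementation recovers it only approximately, by filtering spike trains.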


Neural Networks | 2011

2011 Special Issue: Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware

Alexander D. Rast; Francesco Galluppi; Sergio Davies; Luis A. Plana; Cameron Patterson; Thomas Sharp; David R. Lester; Steve B. Furber

Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience.
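Heterogeneous simulation amounts to letting each subpopulation carry its own update rule, so different dynamics can be stepped side by side in one loop. A minimal sketch, using standard textbook parameter values purely for illustration:

```python
# Two neuron models stepped with explicit Euler; a heterogeneous
# simulation pairs each subpopulation's state with its own rule.
def lif_step(v, i, dt=1.0, tau=20.0, v_rest=-65.0):
    """One Euler step of a leaky integrate-and-fire membrane."""
    return v + dt * ((v_rest - v) / tau + i)

def izhikevich_step(v, u, i, dt=1.0, a=0.02, b=0.2):
    """One Euler step of the Izhikevich two-variable model."""
    v_new = v + dt * (0.04 * v * v + 5 * v + 140 - u + i)
    u_new = u + dt * a * (b * v - u)
    return v_new, u_new
```

On SpiNNaker the dispatch happens per core rather than per Python object, but the principle is the same: the platform is agnostic to which update rule each subpopulation runs.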


International Conference on Neural Information Processing | 2010

A general-purpose model translation system for a universal neural chip

Francesco Galluppi; Alexander D. Rast; Sergio Davies; Steve B. Furber

This paper describes how an emerging standard neural network modelling language can be used to configure a general-purpose neural multi-chip system by describing the process of writing and loading neural network models on the SpiNNaker neuromimetic hardware. It focuses on the implementation of a SpiNNaker module for PyNN, a simulator-independent language for neural network modelling. We successfully extend PyNN to deal with different non-standard (e.g. Izhikevich) cell types, rapidly switch between them, and load applications onto parallel hardware by orchestrating the software layers beneath it, so that they are abstracted away from the end user. Finally, we run simulations in PyNN and compare them against other simulators, successfully reproducing single-neuron and network dynamics and validating the implementation.
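The translation layer can be pictured as a registry that resolves PyNN-style cell-type names to backend configurations before loading; the registry contents and function names here are assumptions for demonstration, not the actual module:

```python
# Illustrative simulator-independent cell-type dispatch: a registry
# maps cell-type names to backend configurations, so the front-end can
# switch between standard and non-standard neuron models without
# changing the hardware loading path.
cell_registry = {}

def register(name, params):
    """Associate a cell-type name with its backend configuration."""
    cell_registry[name] = params

register("IF_curr_exp", {"kind": "lif", "tau_m": 20.0})
register("Izhikevich", {"kind": "izhikevich", "a": 0.02, "b": 0.2})

def build_population(size, cell_type):
    """Resolve a cell-type name before loading it onto the hardware."""
    if cell_type not in cell_registry:
        raise ValueError("unknown cell type: " + cell_type)
    return {"size": size, "config": cell_registry[cell_type]}
```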


International Symposium on Neural Networks | 2010

Algorithm and software for simulation of spiking neural networks on the multi-chip SpiNNaker system

Xin Jin; Francesco Galluppi; Cameron Patterson; Alexander D. Rast; Sergio Davies; Steve Temple; Steve B. Furber

This paper presents the algorithm and software developed for parallel simulation of spiking neural networks on multiple SpiNNaker universal neuromorphic chips. It not only describes approaches to simulating neural network models, such as dynamics, neural representations, and synaptic delays, but also presents the software design for loading a neural application and initialising a simulation on the multi-chip SpiNNaker system. A series of sub-issues are also investigated, such as neuron-to-processor allocation, synapse distribution, and route planning. The platform is verified by running spiking neural applications on both the SoC Designer model and the physical SpiNNaker Test Chip. This work summarises the problems we have solved and highlights those requiring further investigation; it therefore forms the foundation of the software design on SpiNNaker, guiding future development towards a universal platform for real-time simulation of extremely large-scale neural systems.
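One of the sub-issues mentioned, synaptic delays, is commonly handled in event-driven simulators with a circular buffer of future input; the slot count and names in this sketch are assumptions, not the SpiNNaker code:

```python
# A spike with delay d is accumulated into the slot d timesteps ahead;
# each simulation step consumes and clears the current slot.
MAX_DELAY = 16                      # timesteps of look-ahead (illustrative)

class DelayBuffer:
    def __init__(self):
        self.slots = [0.0] * MAX_DELAY
        self.now = 0

    def add_spike(self, weight, delay):
        """Schedule a weighted input 'delay' timesteps in the future."""
        self.slots[(self.now + delay) % MAX_DELAY] += weight

    def step(self):
        """Return and clear the input arriving at the current timestep."""
        i = self.now % MAX_DELAY
        total, self.slots[i] = self.slots[i], 0.0
        self.now += 1
        return total
```

The modular indexing keeps memory constant regardless of simulation length, at the cost of capping the maximum representable delay.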


International Symposium on Neural Networks | 2012

Population-based routing in the SpiNNaker neuromorphic architecture

Sergio Davies; Javier Navaridas; Francesco Galluppi; Steve B. Furber

SpiNNaker is a hardware-based massively-parallel real-time universal neural network simulator designed to simulate large-scale spiking neural networks. Spikes are distributed across the system using a multicast packet router. Each packet represents an event (spike) generated by a neuron. On the basis of the source of the spike (chip, core and neuron), the routers distribute the network packet across the system towards the destination neuron(s). This paper describes a novel approach to the projection routing problem that shows advantages in both the size of the routing tables generated and the computational complexity of generating them. To achieve this, spikes are routed on the basis of the source population, leaving the destination core the duty of propagating the received spike to the appropriate neuron(s).
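The division of labour can be sketched as follows: routers hold one entry per source population (not per neuron), and each destination core expands the packet to its local targets. All structures and names are illustrative, not the SpiNNaker routing-table format:

```python
# Router side: one entry per source population, giving destination cores.
routing_table = {"popA": {"core1", "core2"}}

# Core side: source population -> source neuron -> local target neurons.
core_targets = {
    "core1": {"popA": {0: [0, 1], 3: [2]}},
    "core2": {"popA": {3: [5]}},
}

def route_spike(pop_key, neuron_id):
    """Route a spike by its source population; each destination core
    then resolves which of its local neurons the source neuron reaches."""
    delivered = {}
    for core in routing_table.get(pop_key, ()):
        delivered[core] = core_targets[core][pop_key].get(neuron_id, [])
    return delivered
```

Keying the routers on populations shrinks table size by roughly the population size, while the per-neuron fan-out lookup moves into memory the destination core already holds.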


International Symposium on Neural Networks | 2013

Spike-based learning of transfer functions with the SpiNNaker neuromimetic simulator

Sergio Davies; Terry C. Stewart; Chris Eliasmith; Steve B. Furber

Recent papers have shown the possibility of implementing large-scale neural network models that perform complex algorithms in a biologically realistic way. However, such models have been simulated on architectures unable to perform real-time simulation. In previous work we presented the possibility of simulating simple models in real time on the SpiNNaker neuromimetic architecture. However, such models were “static”: the algorithm performed was defined at design time. In this paper we present a novel learning rule that exploits the peculiarities of the SpiNNaker system, enabling models designed with the Neural Engineering Framework (NEF) to learn transfer functions using a supervised framework. We show that the proposed learning rule, belonging to the Prescribed Error Sensitivity (PES) class, is able to learn both linear and non-linear functions effectively.
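The core of a PES-class rule is that decoders move against the error signal in proportion to each neuron's activity. A compact rate-based sketch, with an illustrative learning rate and fixed activities (the SpiNNaker rule adapts this idea to event-driven hardware):

```python
# One PES update: d_i <- d_i - kappa * error * a_i, nudging the decoded
# estimate toward the supervised target.
def pes_step(decoders, activities, error, kappa=1e-3):
    return [d - kappa * error * a for d, a in zip(decoders, activities)]

# Learn to decode a target value from fixed activities.
activities = [10.0, 5.0, 0.0]
decoders = [0.0, 0.0, 0.0]
target = 1.0
for _ in range(2000):
    estimate = sum(d * a for d, a in zip(decoders, activities))
    decoders = pes_step(decoders, activities, estimate - target)
```

Note that an inactive neuron (activity 0) never changes its decoder, which is what makes the rule local and cheap per spike.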


International Symposium on Neural Networks | 2011

An event-driven model for the SpiNNaker virtual synaptic channel

Alexander D. Rast; Francesco Galluppi; Sergio Davies; Luis A. Plana; Thomas Sharp; Steve B. Furber

Neural networks present a fundamentally different model of computation from conventional sequential hardware, making such hardware inefficient for very-large-scale models. Current neuromorphic devices do not yet offer a fully satisfactory solution even though they have improved simulation performance, in part because of fixed hardware, in part because of poor software support. SpiNNaker introduces a different approach, the “neuromimetic” architecture, that maintains the neural optimisation of dedicated chips while offering FPGA-like universal configurability. Central to this parallel multiprocessor is an asynchronous event-driven model that uses interrupt-generating dedicated hardware on the chip to support real-time neural simulation. In turn this requires an event-driven software model: a rethink as fundamental as that of the hardware. We examine this event-driven software model for an important hardware subsystem, the previously-introduced virtual synaptic channel. Using a scheduler-based system service architecture, the software can “hide” low-level processes and events from models so that the only event the model sees is “spike received”. Results from on-chip simulation demonstrate the robustness of the system even in the presence of extremely bursty, unpredictable traffic, but also expose important model-level tradeoffs that are a consequence of the physical nature of the SpiNNaker chip. This event-driven subsystem is the first component of a library-based development system that allows the user to describe a model in a high-level neural description environment and rely on a lower layer of system services to execute the model efficiently on SpiNNaker. Such a system realises a general-purpose platform that can generate an arbitrary neural network and run it with hardware speed and scale.
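The layering can be pictured as a toy scheduler: low-level events (packet arrival, synaptic-data fetch) are handled internally, and only a single high-level "spike received" callback is exposed to the model. Event names and the dispatch structure are illustrative, not the SpiNNaker system services:

```python
# A toy event scheduler that hides low-level events behind one callback.
class EventScheduler:
    def __init__(self, on_spike):
        self.on_spike = on_spike          # the only callback models see
        self.pending = {}                 # packet key -> awaiting data fetch

    def packet_arrived(self, key):
        """Low-level: a multicast packet arrived; start the synaptic fetch."""
        self.pending[key] = True          # hidden system-service bookkeeping

    def dma_complete(self, key, synapse_row):
        """Low-level: synaptic data fetched; now surface one model event."""
        if self.pending.pop(key, False):
            self.on_spike(key, synapse_row)

received = []
sched = EventScheduler(lambda key, row: received.append((key, row)))
sched.packet_arrived(42)
sched.dma_complete(42, [0.1, 0.2])
```

The model's handler fires only when both low-level events have completed, which is the sense in which the scheduler "hides" the hardware's event sequence.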


International Symposium on Neural Networks | 2011

A forecast-based biologically-plausible STDP learning rule

Sergio Davies; Alexander D. Rast; Francesco Galluppi; Steve B. Furber

Spike Timing Dependent Plasticity (STDP) is a well-known paradigm for learning in neural networks. In this paper we propose a new approach based on the standard STDP algorithm, with modifications and approximations that relate the membrane potential to the LTP (Long Term Potentiation) part of the basic STDP rule, while the standard STDP rule is kept for the LTD (Long Term Depression) part of the algorithm. We show that, on the basis of the membrane potential [5], it is possible to make a statistical prediction of the time the neuron needs to reach its threshold, and therefore the LTP part of the STDP algorithm can be triggered when the neuron receives a spike. We present results that show the efficacy of this algorithm using one or more input patterns repeated over the whole duration of the simulation. Through the approximations suggested in this paper, we introduce a learning rule that is easy to implement in simulators and reduces execution time compared with the standard STDP rule.
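The forecast idea can be sketched for a LIF-like neuron under constant drive: estimate the expected time-to-threshold from the current membrane potential, then potentiate at the pre-synaptic spike as if the post-synaptic spike occurred at that forecast time. The neuron model and all constants are assumptions for demonstration, not the rule's actual parameters:

```python
import math

V_THRESH, TAU_M = 1.0, 20.0   # threshold and membrane time constant (ms)
A_PLUS, TAU_PLUS = 0.1, 20.0  # LTP amplitude and time constant (ms)

def forecast_time_to_threshold(v, drive=1.5):
    """Expected time (ms) for v to reach threshold, from the LIF
    solution v(t) = drive + (v - drive) * exp(-t / tau_m)."""
    if v >= V_THRESH:
        return 0.0
    return TAU_M * math.log((drive - v) / (drive - V_THRESH))

def ltp_on_pre_spike(weight, v_post):
    """Potentiate at pre-spike time using the forecast post-spike time,
    instead of waiting for the actual post-synaptic spike."""
    dt = forecast_time_to_threshold(v_post)   # predicted (post - pre)
    return weight + A_PLUS * math.exp(-dt / TAU_PLUS)
```

A membrane closer to threshold forecasts an earlier post-synaptic spike and so earns a larger potentiation, without the simulator having to store and replay spike histories.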

Collaboration


Dive into Sergio Davies's collaborations.

Top Co-Authors

Luis A. Plana, University of Manchester
Thomas Sharp, University of Manchester
Xin Jin, University of Manchester
Alan B. Stokes, University of Manchester
Steve Temple, University of Manchester