Publication


Featured research published by Francesco Galluppi.


Proceedings of the IEEE | 2014

The SpiNNaker Project

Steve B. Furber; Francesco Galluppi; Steve Temple; Luis A. Plana

The spiking neural network architecture (SpiNNaker) project aims to deliver a massively parallel million-core computer whose interconnect architecture is inspired by the connectivity characteristics of the mammalian brain, and which is suited to the modeling of large-scale spiking neural networks in biological real time. Specifically, the interconnect allows the transmission of a very large number of very small data packets, each conveying explicitly the source, and implicitly the time, of a single neural action potential or “spike.” In this paper, we review the current state of the project, which has already delivered systems with up to 2500 processors, and present the real-time event-driven programming model that supports flexible access to the resources of the machine and has enabled its use by a wide range of collaborators around the world.
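
The packet scheme described here is the address-event principle: each packet carries only the source neuron's identifier, while spike timing is implicit in the moment of delivery. Below is a toy Python sketch of that idea; the Router class and its methods are illustrative inventions, not SpiNNaker APIs.

```python
# Toy sketch of address-event communication: a spike packet carries only the
# source key; its timing is implicit in when it arrives. Illustrative only.
import time

class Router:
    """Maps a source key to the targets subscribed to it (multicast)."""
    def __init__(self):
        self.routes = {}  # source key -> list of delivery callbacks

    def subscribe(self, key, callback):
        self.routes.setdefault(key, []).append(callback)

    def send_spike(self, key):
        t = time.monotonic()  # arrival time stands in for spike time
        for deliver in self.routes.get(key, []):
            deliver(key, t)

router = Router()
router.subscribe(0x42, lambda key, t: print(f"spike from {key:#x} at t={t:.6f}"))
router.send_spike(0x42)
```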


IEEE Journal of Solid-State Circuits | 2013

SpiNNaker: A 1-W 18-Core System-on-Chip for Massively-Parallel Neural Network Simulation

Eustace Painkras; Luis A. Plana; Jim D. Garside; Steve Temple; Francesco Galluppi; Cameron Patterson; David R. Lester; Andrew D. Brown; Steve B. Furber

The modelling of large systems of spiking neurons is computationally very demanding in terms of processing power and communication. SpiNNaker - Spiking Neural Network architecture - is a massively parallel computer system designed to provide a cost-effective and flexible simulator for neuroscience experiments. It can model up to a billion neurons and a trillion synapses in biological real time. The basic building block is the SpiNNaker Chip Multiprocessor (CMP), which is a custom-designed globally asynchronous locally synchronous (GALS) system with 18 ARM968 processor nodes residing in synchronous islands, surrounded by a lightweight, packet-switched asynchronous communications infrastructure. In this paper, we review the design requirements of its very demanding target application, the SpiNNaker micro-architecture, and its implementation issues. We also evaluate the SpiNNaker CMP, which contains 100 million transistors in a 102 mm² die, provides a peak performance of 3.96 GIPS, and has a peak power consumption of 1 W when all processor cores operate at the nominal frequency of 180 MHz. SpiNNaker chips are fully operational and meet their power and performance requirements.
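
A quick sanity check on the quoted figures, assuming the peak-rate and nominal-power numbers can be combined (the abstract does not strictly say they coincide):

```python
# Back-of-envelope energy per instruction from the figures above.
peak_gips = 3.96   # peak performance, instructions per second / 1e9
power_w = 1.0      # peak power at the 180 MHz nominal clock
energy_nj = power_w / (peak_gips * 1e9) * 1e9
print(f"~{energy_nj:.2f} nJ per instruction")  # ~0.25 nJ
```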


International Symposium on Neural Networks | 2010

Implementing spike-timing-dependent plasticity on SpiNNaker neuromorphic hardware

Xin Jin; Alexander D. Rast; Francesco Galluppi; Sergio Davies; Steve B. Furber

This paper presents an efficient approach for implementing spike-timing-dependent plasticity (STDP) on the SpiNNaker neuromorphic hardware. The event-address mapping and the distributed synaptic weight storage schemes used in parallel neuromorphic hardware such as SpiNNaker make the conventional pre-post-sensitive scheme of STDP implementation inefficient, since STDP is triggered when either a pre- or post-synaptic neuron fires. An alternative pre-sensitive scheme is presented to solve this problem, where STDP is triggered only when a pre-synaptic neuron fires. An associated deferred event-driven model is developed to enable the pre-sensitive scheme by deferring the STDP process until sufficient spike-timing history records are available. The paper gives a detailed description of the implementation, as well as a performance estimation of STDP on a multi-chip SpiNNaker machine, along with a discussion of issues related to efficient STDP implementation on parallel neuromorphic hardware.
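
A minimal sketch of the pre-sensitive, deferred scheme as the abstract describes it: updates run only when a presynaptic spike arrives, and are deferred until the postsynaptic history covering the spike's STDP window is complete. The constants and class layout are assumptions for illustration, not the paper's implementation.

```python
import math
from collections import deque

TAU = 20.0                     # STDP time constant (ms), assumed
A_PLUS, A_MINUS = 0.01, 0.012  # learning rates, assumed
WINDOW = 100.0                 # defer until this much post history exists (ms)

class Synapse:
    def __init__(self, w=0.5):
        self.w = w
        self.pending_pre = deque()  # presynaptic spike times awaiting STDP

    def on_pre_spike(self, t_pre):
        # Pre-sensitive scheme: only presynaptic spikes trigger plasticity.
        self.pending_pre.append(t_pre)

    def process(self, now, post_history):
        # Deferred event-driven model: handle a pre spike only once its full
        # STDP window lies inside the recorded postsynaptic history.
        while self.pending_pre and now - self.pending_pre[0] >= WINDOW:
            t_pre = self.pending_pre.popleft()
            for t_post in post_history:
                dt = t_post - t_pre
                if 0 < dt <= WINDOW:        # post after pre: potentiation
                    self.w += A_PLUS * math.exp(-dt / TAU)
                elif -WINDOW <= dt < 0:     # post before pre: depression
                    self.w -= A_MINUS * math.exp(dt / TAU)
```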


Frontiers in Neuroscience | 2015

Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms

Evangelos Stromatias; Daniel Neil; Michael Pfeiffer; Francesco Galluppi; Steve B. Furber; Shih-Chii Liu

Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs), are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. Ongoing work on the design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations, are studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
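
As a flavour of the bit-precision question studied here, the sketch below quantises a weight matrix to n-bit fixed point and measures the resulting error; the uniform symmetric scheme is an assumption for illustration, not necessarily the paper's exact quantisation.

```python
import numpy as np

def quantize(weights, bits, w_max=None):
    """Uniform symmetric quantisation to signed `bits`-bit fixed point."""
    w_max = w_max if w_max is not None else np.max(np.abs(weights))
    levels = 2 ** (bits - 1) - 1
    step = w_max / levels
    return np.round(weights / step) * step

w = np.random.randn(100, 100)
for bits in (8, 4, 2):
    err = np.abs(w - quantize(w, bits)).mean()
    print(f"{bits}-bit weights: mean abs error {err:.4f}")
```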


International Symposium on Neural Networks | 2013

Power analysis of large-scale, real-time neural networks on SpiNNaker

Evangelos Stromatias; Francesco Galluppi; Cameron Patterson; Stephen B. Furber

Simulating large spiking neural networks is non-trivial: supercomputers offer great flexibility at the price of power and communication overheads; custom neuromorphic circuits are more power-efficient but less flexible; while alternative approaches based on GPGPUs and FPGAs, whilst being more readily available, show similar model specialization. As well as efficiency and flexibility, real-time simulation is a desirable neural network characteristic, for example in cognitive robotics, where embodied agents interact with the environment using low-power, event-based neuromorphic sensors. The SpiNNaker neuromimetic architecture has been designed to address these requirements, simulating large-scale heterogeneous models of spiking neurons in real time, offering a unique combination of flexibility, scalability and power efficiency. In this work a 48-chip board is utilised to generate a SpiNNaker power estimation model, based on the numbers of neurons, synapses and their firing rates. In addition, we demonstrate simulations handling up to a quarter of a million neurons, 81 million synapses and 1.8 billion synaptic events per second, with the most complex simulations consuming less than 1 W per SpiNNaker chip.
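
The kind of power model the abstract describes might take the linear form below: a baseline plus per-neuron-update and per-synaptic-event energy costs. The coefficients are placeholders, not the fitted values from the paper.

```python
def spinnaker_power_w(n_neurons, synaptic_events_per_s,
                      p_base=0.3,             # idle power (W), placeholder
                      e_neuron_update=26e-9,  # J per neuron update, placeholder
                      e_event=8e-9,           # J per synaptic event, placeholder
                      updates_per_s=1000):    # 1 ms timestep, real time
    return (p_base
            + n_neurons * updates_per_s * e_neuron_update
            + synaptic_events_per_s * e_event)

print(f"{spinnaker_power_w(16_000, 10_000_000):.2f} W")  # a chip-sized workload
```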


Computing Frontiers | 2012

A hierarchical configuration system for a massively parallel neural hardware platform

Francesco Galluppi; Sergio Davies; Alexander D. Rast; Thomas Sharp; Luis A. Plana; Steve B. Furber

Simulation of large networks of neurons is a powerful and increasingly prominent methodology for investigating brain function and structure. Dedicated parallel hardware is a natural candidate for simulating the dynamic activity of many non-linear units communicating asynchronously. It is only scientifically useful, however, if the simulation tools can be configured and run easily and quickly. We present a method to map network models to computational nodes on the SpiNNaker system, a programmable parallel neurally-inspired hardware architecture, by exploiting the hierarchies built into the model. This PArtitioning and Configuration MANager (PACMAN) system supports arbitrary network topologies and arbitrary membrane potential and synapse dynamics, and (most importantly) decouples the model from the device, allowing a variety of languages (PyNN, Nengo, etc.) to drive the simulation hardware. Model representation operates at the Population/Projection level rather than at the single-neuron and connection level, exploiting hierarchical properties to lower the complexity of allocating resources and mapping the model onto the system. PACMAN can thus be used to generate structures coming from different models and front-ends, either with a host-based process or by parallelising it on the SpiNNaker machine itself to speed up the generation process greatly. We describe the approach with a first implementation of the framework used to configure the current generation of SpiNNaker machines and present results from a set of key benchmarks. The system allows researchers to exploit dedicated simulation hardware which may otherwise be difficult to program. In effect, PACMAN provides automated hardware acceleration for some commonly used network simulators while also pointing towards the advantages of hierarchical configuration for large, domain-specific hardware systems.
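
To make the Population-level mapping concrete, here is a toy partitioning step in the spirit of PACMAN: each population is split into core-sized fragments before placement, so resource allocation never has to reason about individual neurons. The per-core limit and function names are illustrative assumptions.

```python
MAX_NEURONS_PER_CORE = 256  # assumed per-core resource limit

def partition(populations):
    """populations: dict name -> size. Returns (name, lo, hi) fragments."""
    fragments = []
    for name, size in populations.items():
        for lo in range(0, size, MAX_NEURONS_PER_CORE):
            fragments.append((name, lo, min(lo + MAX_NEURONS_PER_CORE, size)))
    return fragments

print(partition({"excitatory": 800, "inhibitory": 200}))
# [('excitatory', 0, 256), ('excitatory', 256, 512), ('excitatory', 512, 768),
#  ('excitatory', 768, 800), ('inhibitory', 0, 200)]
```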


Journal of Neuroscience Methods | 2012

Power-efficient simulation of detailed cortical microcircuits on SpiNNaker.

Thomas Sharp; Francesco Galluppi; Alexander D. Rast; Steve B. Furber

Computer simulation of neural matter is a promising methodology for understanding the function of the brain. Recent anatomical studies have mapped the intricate structure of cortex, and these data have been exploited in numerous simulations attempting to explain its function. However, the largest of these models run inconveniently slowly and require vast amounts of electrical power, which hinders useful experimentation. SpiNNaker is a novel computer architecture designed to address these problems using low-power microprocessors and custom communication hardware. We use four SpiNNaker chips (of a planned fifty thousand) to simulate, in real time, a cortical circuit of ten thousand spiking neurons and four million synapses. In this simulation, the hardware consumes 100 nJ per neuron per millisecond and 43 nJ per postsynaptic potential, which is the smallest quantity reported for any digital computer. We argue that this brings fast, power-feasible and scientifically useful simulation of large cortical areas within reach.
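
The headline numbers can be cross-checked directly; the calculation below is ours, and the 1 Hz mean firing rate in the second step is an assumed figure, not one from the paper.

```python
# Neuron updates alone: 10,000 neurons x 1,000 updates/s x 100 nJ = 1 W.
neurons, e_neuron = 10_000, 100e-9   # J per neuron per millisecond
p_neurons = neurons * 1000 * e_neuron
print(f"neuron updates: {p_neurons:.2f} W")              # 1.00 W

# Synaptic events at an assumed mean rate of 1 Hz over 4e6 synapses:
events_per_s, e_psp = 4e6 * 1.0, 43e-9
print(f"synaptic events: {events_per_s * e_psp:.2f} W")  # 0.17 W
```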


International Symposium on Neural Networks | 2012

Real time on-chip implementation of dynamical systems with spiking neurons

Francesco Galluppi; Sergio Davies; Steve B. Furber; Terry C. Stewart; Chris Eliasmith

Simulation of large-scale networks of spiking neurons has become appealing for understanding the computational principles of the nervous system by producing models based on biological evidence. In particular, networks that can assume a variety of (dynamically) stable states have been proposed as the basis for different behavioural and cognitive functions. This work focuses on implementing the Neural Engineering Framework (NEF), a formal method for mapping attractor networks and control-theoretic algorithms to biologically plausible networks of spiking neurons, on the SpiNNaker system, a massively parallel programmable architecture oriented to the simulation of networks of spiking neurons. We describe how to encode and decode analog values to patterns of neural spikes directly on chip. These methods take advantage of the full programmability of the ARM968 cores constituting the processing base of a SpiNNaker node, and exploit the fast Network-on-chip for spike communication. In this paper we focus on the fundamentals of representing, transforming and implementing dynamics in spiking networks. We show real-time simulation results demonstrating the NEF principles and discuss advantages, precision and scalability. More generally, the present approach can be used to state and test hypotheses with large-scale spiking neural network models for a range of different cognitive functions and behaviours.
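
The encoding and decoding principles of the NEF mentioned here can be shown compactly. The sketch below uses rate-based rectified-linear neurons as a stand-in for spiking LIF units (the on-chip version works with spikes), and solves for linear decoders by least squares; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
encoders = rng.choice([-1.0, 1.0], N)  # preferred directions in 1-D
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)

def rates(x):
    # NEF encoding: a_i = G(alpha_i * e_i * x + b_i), G a rectified line here.
    return np.maximum(gains * encoders * x + biases, 0.0)

xs = np.linspace(-1, 1, 200)
A = np.stack([rates(x) for x in xs])           # population activity matrix
d, *_ = np.linalg.lstsq(A, xs, rcond=None)     # linear decoders
print(f"decoded 0.3 as {rates(0.3) @ d:.3f}")  # should land close to 0.3
```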


Frontiers in Neuroscience | 2015

A framework for plasticity implementation on the SpiNNaker neural architecture

Francesco Galluppi; Xavier Lagorce; Evangelos Stromatias; Michael Pfeiffer; Luis A. Plana; Steve B. Furber; Ryad Benosman

Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing-dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large-scale simulations of plastic neural networks on special-purpose hardware platforms, because synaptic transmissions and updates are poorly matched to the computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to processing synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard spike-timing-dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
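
Of the rules listed, the rate-based BCM rule is the simplest to state; a minimal sketch, with illustrative constants:

```python
def bcm_update(w, pre_rate, post_rate, theta, eta=1e-4, tau_theta=0.1):
    """One BCM step: potentiate above the sliding threshold, depress below it."""
    w += eta * pre_rate * post_rate * (post_rate - theta)
    theta += tau_theta * (post_rate ** 2 - theta)  # sliding modification threshold
    return w, theta
```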


International Symposium on Neural Networks | 2010

The Leaky Integrate-and-Fire neuron: A platform for synaptic model exploration on the SpiNNaker chip

Alexander D. Rast; Francesco Galluppi; Xin Jin; Steve B. Furber

Large-scale neural hardware systems are trending increasingly towards the “neuromimetic” architecture: a general-purpose platform that specialises the hardware for neural networks but allows flexibility in model choice. Since the model is not hard-wired into the chip, exploration of different neural and synaptic models is not merely possible but provides a rich field for research: the possibility to use the hardware to establish useful abstractions of biological neural dynamics that could lead to a functional model of neural computation. Two areas of neural modelling stand out as central: 1) What level of detail in the neurodynamic model is necessary to achieve biologically realistic behaviour? 2) What is the role and effect of different types of synapses in the computation? Using a universal event-driven neural chip, SpiNNaker, we develop a simple model, the Leaky Integrate-and-Fire (LIF) neuron, as a tool for exploring the second of these questions, complementary to the existing Izhikevich model which allows exploration of the first. The LIF model permits the development of multiple synaptic models including fast AMPA/GABA-A synapses with or without STDP learning, and slow NMDA synapses, spanning a range of different dynamic time constants. Its simple dynamics make it possible to expand the complexity of synaptic response, while the general-purpose design of SpiNNaker makes it possible, if necessary, to increase the neurodynamic accuracy with Izhikevich (or even Hodgkin-Huxley) neurons with some tradeoff in model size. Furthermore, the LIF model is a universally-accepted “standard” neural model that provides a good basis for comparisons with software simulations and introduces minimal risk of obscuring important synaptic effects due to unusual neurodynamics. The simple models run thus far demonstrate the viability of both the LIF model and of various possible synaptic models on SpiNNaker and illustrate how it can be used as a platform for model exploration. Such an architecture provides a scalable system for high-performance large-scale neural modelling with complete freedom in model choice.
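
For reference, the LIF dynamics the paper builds on, in their standard discrete-time form (this is the textbook model, not SpiNNaker's fixed-point implementation):

```python
def lif_step(v, i_syn, dt=1.0, tau_m=20.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-65.0, r_m=1.0):
    """One Euler step of dv/dt = ((v_rest - v) + R*I) / tau_m; returns (v, spiked)."""
    v += (dt / tau_m) * ((v_rest - v) + r_m * i_syn)
    if v >= v_thresh:
        return v_reset, True
    return v, False
```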

Collaboration


Dive into Francesco Galluppi's collaborations.

Top Co-Authors

Luis A. Plana (University of Manchester)
Sergio Davies (University of Manchester)
Thomas Sharp (University of Manchester)
Xin Jin (University of Manchester)