Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Steve B. Furber is active.

Publication


Featured research published by Steve B. Furber.


Proceedings of the IEEE | 2014

The SpiNNaker Project

Steve B. Furber; Francesco Galluppi; Steve Temple; Luis A. Plana

The spiking neural network architecture (SpiNNaker) project aims to deliver a massively parallel million-core computer whose interconnect architecture is inspired by the connectivity characteristics of the mammalian brain, and which is suited to the modeling of large-scale spiking neural networks in biological real time. Specifically, the interconnect allows the transmission of a very large number of very small data packets, each conveying explicitly the source, and implicitly the time, of a single neural action potential or “spike.” In this paper, we review the current state of the project, which has already delivered systems with up to 2500 processors, and present the real-time event-driven programming model that supports flexible access to the resources of the machine and has enabled its use by a wide range of collaborators around the world.
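The packet scheme described here is an address-event representation: only the identity of the spiking neuron travels through the fabric, and the receiver takes the arrival moment as the spike time. Below is a minimal sketch of that encode/decode convention, with hypothetical function names (this is not the SpiNNaker API):

```python
import time

def emit_spike(neuron_id: int) -> dict:
    """Encode a spike as an address-event: only the source neuron ID is
    sent; the spike time is implicit in when the packet arrives."""
    return {"source": neuron_id}

def on_packet(packet: dict, arrival_time: float) -> tuple:
    """Receiver side: recover (who, when) from (packet, arrival time)."""
    return packet["source"], arrival_time

pkt = emit_spike(42)
print(on_packet(pkt, time.monotonic()))  # (42, <arrival timestamp>)
```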


IEEE Transactions on Computers | 2013

Overview of the SpiNNaker System Architecture

Steve B. Furber; David R. Lester; Luis A. Plana; Jim D. Garside; Eustace Painkras; Steve Temple; Andrew D. Brown

SpiNNaker (a contraction of Spiking Neural Network Architecture) is a million-core computing engine whose flagship goal is to be able to simulate the behavior of aggregates of up to a billion neurons in real time. It consists of an array of ARM9 cores, communicating via packets carried by a custom interconnect fabric. The packets are small (40 or 72 bits), and their transmission is brokered entirely by hardware, giving the overall engine an extremely high bisection bandwidth of over 5 billion packets/s. Three of the principal axioms of parallel machine design (memory coherence, synchronicity, and determinism) have been discarded in the design without, surprisingly, compromising the ability to perform meaningful computations. A further attribute of the system is the acknowledgment, from the initial design stages, that the sheer size of the implementation will make component failures an inevitable aspect of day-to-day operation, and fault detection and recovery mechanisms have been built into the system at many levels of abstraction. This paper describes the architecture of the machine and outlines the underlying design philosophy; software and applications are to be described in detail elsewhere, and only introduced in passing here as necessary to illuminate the description.
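For concreteness, here is one way the quoted packet sizes could be packed and unpacked, reading 40 bits as an 8-bit header plus a 32-bit routing key, and 72 bits as the same with a further 32-bit payload. Treat that field split and ordering as an assumption for illustration, not the documented SpiNNaker encoding:

```python
def pack_packet(header: int, key: int, payload: int | None = None) -> int:
    """Pack an 8-bit header and a 32-bit key into a 40-bit word, or a
    72-bit word when a 32-bit payload is present. Layout is illustrative."""
    assert 0 <= header < (1 << 8) and 0 <= key < (1 << 32)
    word = (header << 32) | key
    if payload is not None:
        assert 0 <= payload < (1 << 32)
        word = (word << 32) | payload
    return word

def unpack_packet(word: int, has_payload: bool):
    """Invert pack_packet: return (header, key, payload-or-None)."""
    payload = None
    if has_payload:
        payload, word = word & 0xFFFFFFFF, word >> 32
    return word >> 32, word & 0xFFFFFFFF, payload

print(unpack_packet(pack_packet(0xA5, 0xDEADBEEF), has_payload=False))
```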


IEEE Journal of Solid-State Circuits | 2013

SpiNNaker: A 1-W 18-Core System-on-Chip for Massively-Parallel Neural Network Simulation

Eustace Painkras; Luis A. Plana; Jim D. Garside; Steve Temple; Francesco Galluppi; Cameron Patterson; David R. Lester; Andrew D. Brown; Steve B. Furber

The modelling of large systems of spiking neurons is computationally very demanding in terms of processing power and communication. SpiNNaker - Spiking Neural Network architecture - is a massively parallel computer system designed to provide a cost-effective and flexible simulator for neuroscience experiments. It can model up to a billion neurons and a trillion synapses in biological real time. The basic building block is the SpiNNaker Chip Multiprocessor (CMP), a custom-designed globally asynchronous locally synchronous (GALS) system with 18 ARM968 processor nodes residing in synchronous islands, surrounded by a lightweight, packet-switched asynchronous communications infrastructure. In this paper, we review the design requirements of its very demanding target application, the SpiNNaker micro-architecture, and its implementation issues. We also evaluate the SpiNNaker CMP, which contains 100 million transistors in a 102 mm² die, provides a peak performance of 3.96 GIPS, and has a peak power consumption of 1 W when all processor cores operate at the nominal frequency of 180 MHz. SpiNNaker chips are fully operational and meet their power and performance requirements.
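The headline numbers above imply an energy figure worth making explicit: 1 W at 3.96 GIPS is roughly 0.25 nJ per instruction, and 3.96 GIPS across 18 cores at 180 MHz implies about 1.2 instructions per core per cycle. A quick check using only the figures quoted in the abstract:

```python
peak_power_w = 1.0                 # peak power from the abstract
peak_gips = 3.96                   # peak performance from the abstract
cores, hz = 18, 180e6              # core count and nominal clock

nj_per_instruction = peak_power_w / peak_gips  # W per GIPS == nJ/instr
ipc_per_core = (peak_gips * 1e9) / (cores * hz)

print(f"{nj_per_instruction:.2f} nJ/instruction")     # ~0.25
print(f"{ipc_per_core:.2f} instructions/cycle/core")  # ~1.22
```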


Journal of the Royal Society Interface | 2007

Neural systems engineering

Steve B. Furber; Steve Temple

The quest to build an electronic computer based on the operational principles of biological brains has attracted attention over many years. The hope is that, by emulating the brain, it will be possible to capture some of its capabilities and thereby bridge the very large gulf that separates mankind from machines. At present, however, knowledge about the operational principles of the brain is far from complete, so attempts at emulation must employ a great deal of assumption and guesswork to fill the gaps in the experimental evidence. The sheer scale and complexity of the human brain still defies attempts to model it in its entirety at the neuronal level, but Moore's Law is closing this gap and machines with the potential to emulate the brain (so far as we can estimate the computing power required) are no more than a decade or so away. Do computer engineers have something to contribute, alongside neuroscientists, psychologists, mathematicians and others, to the understanding of brain and mind, which remains one of the great frontiers of science?


International Symposium on Neural Networks | 2010

Implementing spike-timing-dependent plasticity on SpiNNaker neuromorphic hardware

Xin Jin; Alexander D. Rast; Francesco Galluppi; Sergio Davies; Steve B. Furber

This paper presents an efficient approach for implementing spike-timing-dependent plasticity (STDP) on the SpiNNaker neuromorphic hardware. The event-address mapping and the distributed synaptic weight storage schemes used in parallel neuromorphic hardware such as SpiNNaker make the conventional pre-post-sensitive scheme of STDP implementation inefficient, since STDP is triggered whenever either a pre- or post-synaptic neuron fires. An alternative pre-sensitive scheme is presented to solve this problem, in which STDP is triggered only when a pre-synaptic neuron fires. An associated deferred event-driven model is developed to enable the pre-sensitive scheme by deferring the STDP process until sufficient spike-timing history records are available. The paper gives a detailed description of the implementation, as well as a performance estimation of STDP on a multi-chip SpiNNaker machine, along with a discussion of some issues related to efficient STDP implementation on parallel neuromorphic hardware.
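A minimal sketch of the pre-sensitive scheme follows: the weight update runs only when a pre-synaptic spike is processed, and processing is deferred until the post-synaptic spike history covering the STDP window has been recorded. The exponential window and all parameter values are illustrative assumptions, not the paper's exact model:

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012    # illustrative learning rates
TAU_PLUS = TAU_MINUS = 20.0      # illustrative time constants (ms)

def stdp_on_pre_spike(t_pre: float, post_history: list[float],
                      weight: float) -> float:
    """Pre-sensitive STDP: invoked only for pre-synaptic spikes, after
    enough post-synaptic spike times (ms) have been buffered (the
    'deferred' part of the deferred event-driven model)."""
    for t_post in post_history:
        dt = t_post - t_pre
        if dt > 0:       # post fired after pre: potentiate
            weight += A_PLUS * math.exp(-dt / TAU_PLUS)
        elif dt < 0:     # post fired before pre: depress
            weight -= A_MINUS * math.exp(dt / TAU_MINUS)
    return weight

print(stdp_on_pre_spike(t_pre=100.0, post_history=[95.0, 103.0], weight=0.5))
```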


International Symposium on Advanced Research in Asynchronous Circuits and Systems | 2000

AMULET3i - an asynchronous system-on-chip

Jim D. Garside; W. J. Bainbridge; A. Bardsley; D. M. Clark; D. A. Edwards; Steve B. Furber; Jianwei Liu; D. W. Lloyd; Siamak Mohammadi; J. S. Pepper; O. Petlin; Steven Temple; John V. Woods

AMULET3i is the third-generation asynchronous ARM-compatible microprocessor subsystem developed at the University of Manchester. It is internally modular, being based around the MARBLE asynchronous on-chip bus, and is also extensible through the addition of conventional clocked synthesizable peripherals via an on-chip synchronous peripheral bus. As such, it is capable of forming the core of a wide range of system-on-chip applications, bringing asynchronous design into commercial use in a flexible and easy-to-use configuration. Its performance and area are comparable with clocked equivalents, and its low-power and electromagnetic-emission characteristics give it unique capabilities in appropriate applications.


Archive | 1995

Computing without Clocks: Micropipelining the ARM Processor

Steve B. Furber

High-performance VLSI microprocessors are becoming very power hungry; this presents an increasing problem of heat removal in desk-top machines and of battery life in portable machines. Asynchronous operation is proposed as a route to more energy-efficient computing. In his 1988 Turing Award Lecture, Ivan Sutherland proposed a modular approach to asynchronous design based on "Micropipelines". The AMULET group at Manchester University has developed an asynchronous implementation of the ARM microprocessor based on micropipelines as part of a broad investigation into low-power techniques. The design is described in detail, the rationale for the work is presented, and the characteristics of the chip are described. The first silicon from the design arrived in April 1994 and an evaluation of it is presented here.
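Sutherland's micropipeline is an elastic FIFO whose stages synchronise through local request/acknowledge handshakes instead of a global clock. Below is a toy software model of that control discipline; real micropipelines are built from Muller C-elements and event-controlled latches, so this only illustrates the clockless, backpressured data flow:

```python
from queue import Queue
from threading import Thread

def stage(inbox: Queue, outbox: Queue) -> None:
    """One pipeline stage: an arriving item plays the role of 'request',
    and the bounded-queue handoff plays the role of 'acknowledge'.
    There is no clock; a stage runs when its neighbours are ready."""
    while (item := inbox.get()) is not None:
        outbox.put(item + 1)     # stand-in for the stage's processing
    outbox.put(None)             # propagate end-of-stream

a, b, c = Queue(maxsize=1), Queue(maxsize=1), Queue(maxsize=1)
Thread(target=stage, args=(a, b)).start()
Thread(target=stage, args=(b, c)).start()

for x in [0, 1, 2, None]:
    a.put(x)
while (y := c.get()) is not None:
    print(y)                     # 2, 3, 4
```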


Frontiers in Neuroscience | 2015

Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms

Evangelos Stromatias; Daniel Neil; Michael Pfeiffer; Francesco Galluppi; Steve B. Furber; Shih-Chii Liu

Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs), are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The ongoing work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations, are studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
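The bit-precision experiments reduce, at their core, to rounding trained weights onto a small set of levels and re-evaluating the network. A minimal sketch of uniform weight quantization in that spirit (the rounding scheme and values are illustrative, not the paper's procedure):

```python
import numpy as np

def quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Round weights onto 2**bits uniformly spaced levels spanning
    their observed range."""
    lo, hi = weights.min(), weights.max()
    step = (hi - lo) / (2 ** bits - 1)
    return lo + np.round((weights - lo) / step) * step

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
for bits in (8, 4, 2):       # the paper reports tolerance down to ~2 bits
    err = np.abs(quantize(w, bits) - w).mean()
    print(f"{bits} bits: mean abs weight error {err:.4f}")
```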


International Conference on Supercomputing | 2009

Understanding the interconnection network of SpiNNaker

Javier Navaridas; Mikel Luján; José Miguel-Alonso; Luis A. Plana; Steve B. Furber

SpiNNaker is a massively parallel architecture designed to model large-scale spiking neural networks in (biological) real time. Its design is based around ad-hoc multi-core Systems-on-Chip which are interconnected using a two-dimensional toroidal triangular mesh. Neurons are modeled in software and their spikes generate packets that propagate through the on- and inter-chip communication fabric, relying on custom-made on-chip multicast routers. This paper models and evaluates large-scale instances of its novel interconnect (more than 65 thousand nodes, or over one million computing cores), focusing on real-time features and fault tolerance. The key contribution can be summarized as understanding the properties of the feasible topologies and establishing the stable operation of SpiNNaker under different levels of degradation. First, we derive analytically the topological characteristics of the network, which are later confirmed by experimental work. With the computational model developed, we investigate the topology of SpiNNaker and compare it with a standard 3-dimensional torus. The novel emergency routing mechanism, implemented within the routers, makes the topology of SpiNNaker more robust than the 3-dimensional torus, even though the latter has better topological characteristics. Furthermore, we obtain optimal values of two router parameters related to livelock and deadlock avoidance mechanisms.
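In the toroidal triangular mesh each node has six links: the four of a square torus plus one diagonal in each of two opposite directions. Below is a sketch of the neighbourhood and a minimal hop-count estimate on an n-by-n torus; the choice of the (+1,+1)/(-1,-1) diagonal follows common descriptions of the SpiNNaker layout and should be treated as an assumption:

```python
# Six links per node: E, W, N, S, plus the (+1,+1) and (-1,-1) diagonals.
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]

def neighbours(x: int, y: int, n: int) -> list[tuple[int, int]]:
    """Neighbours of (x, y) on an n-by-n toroidal triangular mesh."""
    return [((x + dx) % n, (y + dy) % n) for dx, dy in DIRS]

def hops(dx: int, dy: int, n: int) -> int:
    """Minimal hops for a displacement: try each wraparound, and use
    the diagonal when both components move in the same direction."""
    best = n * n
    for ddx in (dx, dx - n, dx + n):
        for ddy in (dy, dy - n, dy + n):
            if ddx * ddy > 0:
                cost = max(abs(ddx), abs(ddy))   # diagonal shortcut
            else:
                cost = abs(ddx) + abs(ddy)
            best = min(best, cost)
    return best

print(neighbours(0, 0, 8))
print(hops(5, 5, 8))   # 3: wrap both axes, then take the diagonal
```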


Computing Frontiers | 2012

A hierarchical configuration system for a massively parallel neural hardware platform

Francesco Galluppi; Sergio Davies; Alexander D. Rast; Thomas Sharp; Luis A. Plana; Steve B. Furber

Simulation of large networks of neurons is a powerful and increasingly prominent methodology for investigating brain function and structure. Dedicated parallel hardware is a natural candidate for simulating the dynamic activity of many non-linear units communicating asynchronously. It is only scientifically useful, however, if the simulation tools can be configured and run easily and quickly. We present a method to map network models to computational nodes on the SpiNNaker system, a programmable parallel neurally-inspired hardware architecture, by exploiting the hierarchies built into the model. This PArtitioning and Configuration MANager (PACMAN) system supports arbitrary network topologies and arbitrary membrane potential and synapse dynamics, and (most importantly) decouples the model from the device, allowing a variety of languages (PyNN, Nengo, etc.) to drive the simulation hardware. Model representation operates at the Population/Projection level rather than at the single-neuron and connection level, exploiting hierarchical properties to lower the complexity of allocating resources and mapping the model onto the system. PACMAN can thus be used to generate structures coming from different models and front-ends, either with a host-based process or by parallelising it on the SpiNNaker machine itself to greatly speed up the generation process. We describe the approach with a first implementation of the framework used to configure the current generation of SpiNNaker machines and present results from a set of key benchmarks. The system allows researchers to exploit dedicated simulation hardware which may otherwise be difficult to program. In effect, PACMAN provides automated hardware acceleration for some commonly used network simulators while also pointing towards the advantages of hierarchical configuration for large, domain-specific hardware systems.
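The heart of the mapping step is splitting each population into core-sized slices and placing those slices on the machine, with bookkeeping at the population level rather than per neuron. A toy sketch of that hierarchy-aware partitioning follows; the per-core limit and sequential placement are illustrative assumptions, and the real PACMAN also handles projections, routing keys, and resource constraints:

```python
from dataclasses import dataclass

MAX_NEURONS_PER_CORE = 256       # illustrative per-core capacity

@dataclass
class Slice:
    population: str
    lo: int                      # first neuron index in the slice
    hi: int                      # last neuron index (inclusive)
    core: int                    # core the slice is placed on

def partition(populations: dict[str, int]) -> list[Slice]:
    """Split each population into slices no larger than the per-core
    limit and place the slices onto consecutive cores."""
    slices, core = [], 0
    for name, size in populations.items():
        for lo in range(0, size, MAX_NEURONS_PER_CORE):
            hi = min(lo + MAX_NEURONS_PER_CORE, size) - 1
            slices.append(Slice(name, lo, hi, core))
            core += 1
    return slices

for s in partition({"excitatory": 600, "inhibitory": 150}):
    print(s)
```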

Collaboration


Dive into Steve B. Furber's collaborations.

Top Co-Authors

Luis A. Plana
University of Manchester

Steve Temple
University of Manchester

Jim D. Garside
University of Manchester

Sergio Davies
University of Manchester

Xin Jin
University of Manchester

Mikel Luján
University of Manchester