Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where David R. Lester is active.

Publication


Featured research published by David R. Lester.


IEEE Transactions on Computers | 2013

Overview of the SpiNNaker System Architecture

Steve B. Furber; David R. Lester; Luis A. Plana; Jim D. Garside; Eustace Painkras; Steve Temple; Andrew D. Brown

SpiNNaker (a contraction of Spiking Neural Network Architecture) is a million-core computing engine whose flagship goal is to be able to simulate the behavior of aggregates of up to a billion neurons in real time. It consists of an array of ARM9 cores, communicating via packets carried by a custom interconnect fabric. The packets are small (40 or 72 bits), and their transmission is brokered entirely by hardware, giving the overall engine an extremely high bisection bandwidth of over 5 billion packets/s. Three of the principal axioms of parallel machine design (memory coherence, synchronicity, and determinism) have been discarded in the design without, surprisingly, compromising the ability to perform meaningful computations. A further attribute of the system is the acknowledgment, from the initial design stages, that the sheer size of the implementation will make component failures an inevitable aspect of day-to-day operation, and fault detection and recovery mechanisms have been built into the system at many levels of abstraction. This paper describes the architecture of the machine and outlines the underlying design philosophy; software and applications are to be described in detail elsewhere, and only introduced in passing here as necessary to illuminate the description.
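
As an editorial illustration of the packet sizes quoted above, here is a minimal Haskell sketch. The split into an 8-bit control header, a 32-bit routing key, and an optional 32-bit payload is an assumption inferred from the two sizes named in the abstract (40 and 72 bits), not a field layout taken from the paper.

```haskell
-- Hypothetical layout, assumed from the 40/72-bit sizes in the abstract:
-- an 8-bit control header, a 32-bit routing key, and an optional
-- 32-bit payload.
import Data.Word (Word32, Word8)

data Packet = Packet
  { header  :: Word8         -- control bits (packet type, routing flags)
  , key     :: Word32        -- routing key, e.g. identifying a source neuron
  , payload :: Maybe Word32  -- optional payload extends 40 bits to 72
  }

packetBits :: Packet -> Int
packetBits p = 8 + 32 + maybe 0 (const 32) (payload p)

main :: IO ()
main = print (packetBits (Packet 0 42 Nothing), packetBits (Packet 0 42 (Just 7)))
-- prints (40,72)
```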


International Symposium on Neural Networks | 2008

SpiNNaker: Mapping neural networks onto a massively-parallel chip multiprocessor

Muhammad Mukaram Khan; David R. Lester; Luis A. Plana; Alexander D. Rast; Xin Jin; Eustace Painkras; Stephen B. Furber

SpiNNaker is a novel chip, based on the ARM processor, which is designed to support large-scale spiking neural network simulations. In this paper we describe some of the features that permit SpiNNaker chips to be connected together to form scalable massively-parallel systems. Our eventual goal is to be able to simulate neural networks consisting of 10^9 neurons running in 'real time', by which we mean that a similarly sized collection of biological neurons would run at the same speed. In this paper we describe the methods by which neural networks are mapped onto the system, and how features designed into the chip are to be exploited in practice. We will also describe the modelling and verification activities by which we hope to ensure that, when the chip is delivered, it will work as anticipated.
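
A minimal sketch of the mapping step the abstract describes: partitioning a neuron population into fixed-size chunks, one chunk per core. The figure of 1000 neurons per core is hypothetical, chosen only for illustration.

```haskell
-- Hypothetical sketch: assign neuron indices to cores in fixed-size
-- chunks. The 1000-neurons-per-core figure is illustrative only.
neuronsPerCore :: Int
neuronsPerCore = 1000

-- Pair each core id with the neuron indices it will simulate.
mapToCores :: Int -> [(Int, [Int])]
mapToCores n = zip [0 ..] (chunks [0 .. n - 1])
  where
    chunks [] = []
    chunks xs = take neuronsPerCore xs : chunks (drop neuronsPerCore xs)

main :: IO ()
main = mapM_ (\(core, ns) -> print (core, length ns)) (mapToCores 5000)
-- five cores, 1000 neurons each
```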


IEEE Journal of Solid-State Circuits | 2013

SpiNNaker: A 1-W 18-Core System-on-Chip for Massively-Parallel Neural Network Simulation

Eustace Painkras; Luis A. Plana; Jim D. Garside; Steve Temple; Francesco Galluppi; Cameron Patterson; David R. Lester; Andrew D. Brown; Steve B. Furber

The modelling of large systems of spiking neurons is computationally very demanding in terms of processing power and communication. SpiNNaker - Spiking Neural Network architecture - is a massively parallel computer system designed to provide a cost-effective and flexible simulator for neuroscience experiments. It can model up to a billion neurons and a trillion synapses in biological real time. The basic building block is the SpiNNaker Chip Multiprocessor (CMP), which is a custom-designed globally asynchronous locally synchronous (GALS) system with 18 ARM968 processor nodes residing in synchronous islands, surrounded by a lightweight, packet-switched asynchronous communications infrastructure. In this paper, we review the design requirements for its very demanding target application, the SpiNNaker micro-architecture, and its implementation issues. We also evaluate the SpiNNaker CMP, which contains 100 million transistors in a 102 mm² die, provides a peak performance of 3.96 GIPS, and has a peak power consumption of 1 W when all processor cores operate at the nominal frequency of 180 MHz. SpiNNaker chips are fully operational and meet their power and performance requirements.
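
Two back-of-envelope figures follow directly from the numbers quoted in the abstract (18 cores, 1 W peak power, 3.96 GIPS peak performance):

```haskell
-- Derived only from figures quoted in the abstract.
cores, peakPowerW, peakGips :: Double
cores      = 18
peakPowerW = 1.0
peakGips   = 3.96

main :: IO ()
main = do
  putStrLn ("Peak power per core: " ++ show (1000 * peakPowerW / cores) ++ " mW")  -- ~55.6 mW
  putStrLn ("Peak MIPS per core:  " ++ show (1000 * peakGips / cores))             -- 220 MIPS
```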


IEEE Transactions on Computers | 2009

Verified Real Number Calculations: A Library for Interval Arithmetic

Marc Daumas; David R. Lester; César Muñoz

Real number calculations on elementary functions are remarkably difficult to handle in mechanical proofs. In this paper, we show how these calculations can be performed within a theorem prover or proof assistant in a convenient and highly automated as well as interactive way. First, we formally establish upper and lower bounds for elementary functions. Then, based on these bounds, we develop a rational interval arithmetic where real number calculations take place in an algebraic setting. In order to reduce the dependency effect of interval arithmetic, we integrate two techniques: interval splitting and Taylor series expansions. This pragmatic approach has been developed, and formally verified, in a theorem prover. The formal development also includes a set of customizable strategies to automate proofs involving explicit calculations over real numbers. Our ultimate goal is to provide guaranteed proofs of numerical properties with minimal human theorem-prover interaction.
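
The paper's development lives inside a theorem prover (PVS); purely as an illustration of the underlying idea, here is a minimal Haskell sketch of rational interval arithmetic, including the splitting technique used to reduce the dependency effect.

```haskell
-- Minimal rational interval arithmetic; illustration only, not the
-- paper's PVS development.
data Interval = I Rational Rational  -- invariant: lower <= upper
  deriving Show

add, sub, mul :: Interval -> Interval -> Interval
add (I a b) (I c d) = I (a + c) (b + d)
sub (I a b) (I c d) = I (a - d) (b - c)
mul (I a b) (I c d) = I (minimum ps) (maximum ps)
  where ps = [a * c, a * d, b * c, b * d]

-- Splitting: bisect the input to fight the dependency effect, where a
-- variable occurring twice widens the naive result.
split :: Interval -> (Interval, Interval)
split (I a b) = (I a m, I m b) where m = (a + b) / 2

hull :: Interval -> Interval -> Interval
hull (I a b) (I c d) = I (min a c) (max b d)

-- f x = x * (1 - x); the true range on [0,1] is [0, 1/4].
f :: Interval -> Interval
f x = x `mul` (I 1 1 `sub` x)

main :: IO ()
main = do
  let x      = I 0 1
      (l, r) = split x
  print (f x)              -- I (0 % 1) (1 % 1): dependency widens the bound
  print (f l `hull` f r)   -- I (0 % 1) (1 % 2): splitting tightens it
```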


Journal of Functional Programming | 2006

FUNCTIONAL PEARL: Enumerating the rationals

Jeremy Gibbons; David R. Lester; Richard S. Bird

Every lazy functional programmer knows about the following approach to enumerating the positive rationals: generate a two-dimensional matrix (an infinite list of infinite lists), then traverse its finite diagonals (an infinite list of finite lists).
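
The scheme the abstract alludes to, as a runnable Haskell sketch. Note that it produces duplicates (1/1 reappears as 2/2, 3/3, ...), which is precisely the kind of defect the pearl goes on to repair with better enumerations.

```haskell
import Data.Ratio ((%))

-- The infinite matrix of positive rationals m/n.
matrix :: [[Rational]]
matrix = [ [ m % n | n <- [1 ..] ] | m <- [1 ..] ]

-- The d-th diagonal collects the entries whose indices sum to d.
diagonals :: [[a]] -> [[a]]
diagonals xss = [ [ xss !! i !! (d - i) | i <- [0 .. d] ] | d <- [0 ..] ]

rationals :: [Rational]
rationals = concat (diagonals matrix)

main :: IO ()
main = print (take 10 rationals)
-- [1 % 1, 1 % 2, 2 % 1, 1 % 3, 1 % 1, 3 % 1, ...]: 2/2 collapses to a
-- duplicate 1 % 1, the defect a better enumeration must avoid.
```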


Software: Practice and Experience | 1991

A modular fully-lazy lambda lifter in Haskell

Simon L. Peyton Jones; David R. Lester

An important step in many compilers for functional languages is lambda lifting. In his thesis, Hughes showed that by doing lambda lifting in a particular way, a useful property called full laziness can be preserved. Full laziness has been seen as intertwined with lambda lifting ever since.
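
A small before-and-after example of the transformation itself (not the paper's algorithm): lambda lifting turns a local definition's free variables into explicit parameters so the definition can float to the top level.

```haskell
-- Before lifting: g is local and has a free variable x.
f :: Int -> Int
f x = let g y = x + y in g 3 + g 4

-- After lifting: x becomes an explicit parameter and gLifted is a
-- top-level definition. (A fully lazy lifter, as in the paper, would
-- additionally float out maximal free expressions so they are shared
-- rather than recomputed; this sketch shows only the basic step.)
gLifted :: Int -> Int -> Int
gLifted x y = x + y

fLifted :: Int -> Int
fLifted x = gLifted x 3 + gLifted x 4

main :: IO ()
main = print (f 10, fLifted 10)  -- (27,27): the two agree
```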


Neural Networks | 2011

2011 Special Issue: Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware

Alexander D. Rast; Francesco Galluppi; Sergio Davies; Luis A. Plana; Cameron Patterson; Thomas Sharp; David R. Lester; Steve B. Furber

Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF (leaky integrate-and-fire) and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience.
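
For concreteness, one of the two neuron models named in the abstract, stepped with forward Euler, as a Haskell sketch. The parameters are the standard "regular spiking" values from Izhikevich's published model, not values taken from the SpiNNaker implementation.

```haskell
-- Izhikevich model, forward Euler; standard "regular spiking"
-- parameters, not SpiNNaker-specific values.
data Neuron = Neuron Double Double  -- membrane potential v, recovery u

a, b, c, d :: Double
a = 0.02; b = 0.2; c = -65; d = 8

-- One Euler step of size dt (ms) under input current i; returns the new
-- state and whether the neuron fired.
step :: Double -> Double -> Neuron -> (Neuron, Bool)
step dt i (Neuron v u)
  | v' >= 30  = (Neuron c (u' + d), True)  -- spike: reset v, bump u
  | otherwise = (Neuron v' u', False)
  where
    v' = v + dt * (0.04 * v * v + 5 * v + 140 - u + i)
    u' = u + dt * (a * (b * v - u))

main :: IO ()
main = print (length (filter id (take 1000 spikes)))  -- spike count over 1 s
  where
    run st = let (st', fired) = step 1.0 10 st in fired : run st'
    spikes = run (Neuron (-65) (b * (-65)))
```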


Formal Methods | 2007

Topology in PVS: continuous mathematics with applications

David R. Lester

Topology can seem too abstract to be useful when first encountered. My aim in this paper is to show that --- on the contrary --- it is the vital building block for continuous mathematics. In particular, if a proof can be undertaken at the level of topology, it is almost always simpler there than when undertaken within the context of specific classes of topology such as those of metric spaces or Domain Theory.
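
For readers encountering the subject cold, the two standard definitions everything in the paper rests on, stated in conventional notation (this is textbook mathematics, not the PVS formalisation itself):

```latex
\[
\text{A topology on } X \text{ is a family } \tau \subseteq \mathcal{P}(X)
\text{ such that } \emptyset, X \in \tau, \text{ arbitrary unions of members
of } \tau \text{ are in } \tau, \text{ and finite intersections of members
of } \tau \text{ are in } \tau.
\]
\[
f : X \to Y \text{ is continuous iff } f^{-1}(V) \in \tau_X
\text{ for every open } V \in \tau_Y.
\]
```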


Symposium on Computer Arithmetic | 2001

Effective continued fractions

David R. Lester

Only the leading seven terms of a continued fraction are needed to perform on-line arithmetic, provided the continued fractions are of the correct form. This forms the basis of a proof that there is an effective representation of the computable reals as continued fractions; we also demonstrate that the basic arithmetic operations are computable using this representation.
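
As background (textbook machinery, not the paper's seven-term on-line algorithm), the convergents h_n/k_n of a continued fraction [a0; a1, a2, ...] satisfy h_n = a_n h_(n-1) + h_(n-2) and k_n = a_n k_(n-1) + k_(n-2), which a Haskell sketch computes directly:

```haskell
import Data.Ratio ((%))

-- Convergents of [a0; a1, a2, ...] via the standard recurrence,
-- seeded with h_(-1) = 1, h_(-2) = 0 and k_(-1) = 0, k_(-2) = 1.
convergents :: [Integer] -> [Rational]
convergents as = zipWith (%) (go 1 0 as) (go 0 1 as)
  where
    go p q (a : rest) = let r = a * p + q in r : go r p rest
    go _ _ []         = []

main :: IO ()
main = mapM_ print (convergents [1, 2, 2, 2, 2, 2])
-- 1, 3/2, 7/5, 17/12, 41/29, 99/70: the convergents of sqrt 2
```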


Selected Papers from the 4th International Workshop on Computability and Complexity in Analysis (CCA '00) | 2000

A Survey of Exact Arithmetic Implementations

Paul Gowland; David R. Lester

This paper provides a survey of practical systems for exact arithmetic. We describe some of the methods used in their implementation, and suggest reasons for the performance differences displayed by some of the competing systems at this year's CCA Exact Arithmetic Competition. Because the practical aspects of the field of exact arithmetic are at an early stage, and many of the systems are prototypes, we have not discussed portability, user interfaces, or general usability. It is to be hoped that these aspects might be addressed by participants in any further competitions organised by the CCA committee.
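
One representation that exact-arithmetic systems commonly build on, sketched in Haskell (an illustration of the general idea, not a description of any particular surveyed system): a computable real is a function from a requested precision to a rational approximation.

```haskell
import Data.Ratio ((%))

-- A computable real, represented as a function that, given n, returns a
-- rational within 2^(-n) of the true value. Illustration only.
type CReal = Int -> Rational

fromRat :: Rational -> CReal
fromRat = const

-- For the sum to be within 2^(-n), ask each argument for 2^(-(n+1)):
-- the two errors then total at most 2^(-n).
addR :: CReal -> CReal -> CReal
addR x y n = x (n + 1) + y (n + 1)

main :: IO ()
main = print (addR (fromRat (1 % 3)) (fromRat (1 % 7)) 20)  -- 10 % 21
```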

Collaboration


Dive into David R. Lester's collaboration.

Top Co-Authors

Luis A. Plana

University of Manchester

Alan B. Stokes

University of Manchester

Andrew D. Brown

University of Southampton

Jim D. Garside

University of Manchester

Steve Temple

University of Manchester

Andrew Rowley

University of Manchester

Paul Gowland

University of Manchester

Marc Daumas

École normale supérieure de Lyon
