Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Thomas Pfeil is active.

Publications


Featured research published by Thomas Pfeil.


Frontiers in Neuroscience | 2013

Six networks on a universal neuromorphic computing substrate.

Thomas Pfeil; Andreas Grübl; Sebastian Jeltsch; Eric Müller; Paul Müller; Mihai A. Petrovici; Michael Schmuker; Daniel Brüderle; Johannes Schemmel; K. Meier

In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.
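The abstract above mentions that fixed-pattern noise inherent to the analog circuitry is reduced by calibration routines. A minimal sketch of the general idea, with purely illustrative numbers and a hypothetical per-neuron threshold offset as the source of mismatch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: each analog neuron carries a fixed,
# device-specific offset on its firing threshold (fixed-pattern noise).
# A calibration routine measures every neuron's response to a common
# reference configuration and stores a per-neuron correction term.

n_neurons = 8
true_offsets = rng.normal(0.0, 0.1, n_neurons)   # device mismatch (unknown)

def measure_threshold(configured, offsets):
    """Emulate reading back each neuron's effective threshold."""
    return configured + offsets

target_threshold = 1.0
measured = measure_threshold(target_threshold, true_offsets)

# Calibration: store the deviation from the target as a correction term.
correction = target_threshold - measured

# Applying the correction when configuring the chip cancels the mismatch.
calibrated = measure_threshold(target_threshold + correction, true_offsets)
spread_before = np.std(measured)
spread_after = np.std(calibrated)
print(spread_before, spread_after)
```

In practice calibration also has to cope with measurement noise and parameter interdependencies; this sketch only captures the offset-cancellation principle.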


Biological Cybernetics | 2011

A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems

Daniel Brüderle; Mihai A. Petrovici; Bernhard Vogginger; Matthias Ehrlich; Thomas Pfeil; Sebastian Millner; Andreas Grübl; Karsten Wendt; Eric Müller; Marc-Olivier Schwartz; Dan Husmann de Oliveira; Sebastian Jeltsch; Johannes Fieres; Moritz Schilling; Paul Müller; Oliver Breitwieser; Venelin Petkov; Lyle Muller; Andrew P. Davison; Pradeep Krishnamurthy; Jens Kremkow; Mikael Lundqvist; Eilif Muller; Johannes Partzsch; Stefan Scholze; Lukas Zühl; Christian Mayr; Alain Destexhe; Markus Diesmann; Tobias C. Potjans

In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware–software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results.


Frontiers in Neuroscience | 2012

Is a 4-Bit Synaptic Weight Resolution Enough? – Constraints on Enabling Spike-Timing Dependent Plasticity in Neuromorphic Hardware

Thomas Pfeil; Tobias C. Potjans; Sven Schrader; Wiebke Potjans; Johannes Schemmel; Markus Diesmann; K. Meier

Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may raise synergy effects between hardware developers and neuroscientists.
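As a back-of-the-envelope illustration of the discretization studied above, the following sketch rounds continuous weights onto a 16-level (4-bit) grid. The weight range and rounding scheme are assumptions for illustration, not the hardware's actual mapping:

```python
import numpy as np

# Illustrative sketch: map continuous synaptic weights onto the
# 2**4 = 16 evenly spaced levels that a 4-bit resolution provides.

def discretize(weights, n_bits=4, w_max=1.0):
    """Round weights in [0, w_max] to 2**n_bits evenly spaced levels."""
    n_levels = 2 ** n_bits
    step = w_max / (n_levels - 1)
    return np.clip(np.round(weights / step), 0, n_levels - 1) * step

rng = np.random.default_rng(1)
w = rng.uniform(0.0, 1.0, 1000)
w4 = discretize(w)

# The worst-case rounding error is half a discretization step.
max_err = np.max(np.abs(w - w4))
print(len(np.unique(w4)), max_err)
```

With rounding to the nearest level, the error per weight is bounded by half a step (here about 0.033 for a unit weight range), which frames the kind of perturbation the study propagates up to network-level simulations.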


Proceedings of the National Academy of Sciences of the United States of America | 2014

A neuromorphic network for generic multivariate data classification

Michael Schmuker; Thomas Pfeil; Martin P. Nawrot

Significance: One primary goal of computational neuroscience is to uncover fundamental principles of computations that are performed by the brain. In our work, we took direct inspiration from biology for a technical application of brain-like processing. We make use of neuromorphic hardware—electronic versions of neurons and synapses on a microchip—to implement a neural network inspired by the sensory processing architecture of the nervous system of insects. We demonstrate that this neuromorphic network achieves classification of generic multidimensional data—a widespread problem with many technical applications. Our work provides a proof of concept for using analog electronic microcircuits mimicking neurons to perform real-world computing tasks, and it describes the benefits and challenges of the neuromorphic approach.

Computational neuroscience has uncovered a number of computational principles used by nervous systems. At the same time, neuromorphic hardware has matured to a state where fast silicon implementations of complex neural networks have become feasible. En route to future technical applications of neuromorphic computing the current challenge lies in the identification and implementation of functional brain algorithms. Taking inspiration from the olfactory system of insects, we constructed a spiking neural network for the classification of multivariate data, a common problem in signal and data analysis. In this model, real-valued multivariate data are converted into spike trains using “virtual receptors” (VRs). Their output is processed by lateral inhibition and drives a winner-take-all circuit that supports supervised learning. VRs are conveniently implemented in software, whereas the lateral inhibition and classification stages run on accelerated neuromorphic hardware. When trained and tested on real-world datasets, we find that the classification performance is on par with a naïve Bayes classifier. An analysis of the network dynamics shows that stable decisions in output neuron populations are reached within less than 100 ms of biological time, matching the time-to-decision reported for the insect nervous system. Through leveraging a population code, the network tolerates the variability of neuronal transfer functions and trial-to-trial variation that is inevitably present on the hardware system. Our work provides a proof of principle for the successful implementation of a functional spiking neural network on a configurable neuromorphic hardware system that can readily be applied to real-world computing problems.
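The lateral-inhibition and winner-take-all stages described above can be caricatured in a rate-based toy model. Everything below is an illustrative assumption (not the spiking hardware implementation): inhibition strength, the simple supervised Hebbian-style training rule, and the toy data are chosen only to show the mechanism:

```python
import numpy as np

# Rate-based caricature of the classifier architecture: lateral inhibition
# decorrelates input channels, and a winner-take-all readout picks the class
# whose units receive the strongest drive.

def lateral_inhibition(rates, strength=0.5):
    """Each channel is suppressed by the mean activity of the others."""
    others = (rates.sum(-1, keepdims=True) - rates) / (rates.shape[-1] - 1)
    return np.clip(rates - strength * others, 0.0, None)

def train(X, y, n_classes, lr=0.1):
    """Supervised Hebbian-style rule: strengthen weights to the true class."""
    w = np.zeros((n_classes, X.shape[1]))
    for x, label in zip(X, y):
        w[label] += lr * lateral_inhibition(x)
    return w

def classify(w, x):
    drive = w @ lateral_inhibition(x)
    return int(np.argmax(drive))          # winner-take-all decision

# Toy data: two well-separated firing-rate patterns.
X = np.array([[10.0, 1.0, 1.0], [1.0, 1.0, 10.0]] * 20)
y = np.array([0, 1] * 20)
w = train(X, y, n_classes=2)
preds = [classify(w, x) for x in X]
print(np.mean(np.array(preds) == y))
```

On the hardware, both stages are realized with populations of spiking neurons and the decision is carried by a population code rather than a single argmax, which is what provides the robustness to mismatch discussed in the abstract.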


International Symposium on Neural Networks | 2013

Neuromorphic learning towards nano second precision

Thomas Pfeil; Anne-Christine Scherzer; Johannes Schemmel; K. Meier

Temporal coding is one approach to representing information in spiking neural networks. An example of its application is the location of sounds by barn owls that requires especially precise temporal coding. Dependent upon the azimuthal angle, the arrival times of sound signals are shifted between both ears. In order to determine these interaural time differences, the phase difference of the signals is measured. We implemented this biologically inspired network on a neuromorphic hardware system and demonstrate spike-timing dependent plasticity on an analog, highly accelerated hardware substrate. Our neuromorphic implementation enables the resolution of time differences of less than 50 ns. On-chip Hebbian learning mechanisms select inputs from a pool of neurons which code for the same sound frequency. Hence, noise caused by different synaptic delays across these inputs is reduced. Furthermore, learning compensates for variations on neuronal and synaptic parameters caused by device mismatch intrinsic to the neuromorphic substrate.
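The relation between the measured phase difference and the interaural time difference used above is simply delta_t = delta_phi / (2·pi·f). A small sketch with illustrative numbers (the frequency and phase values are assumptions, not taken from the paper):

```python
import math

# The interaural time difference (ITD) implied by a measured phase
# difference of a pure tone of frequency f:
#     delta_t = delta_phi / (2 * pi * f)

def itd_from_phase(delta_phi, freq_hz):
    """Time difference corresponding to a phase difference at a frequency."""
    return delta_phi / (2 * math.pi * freq_hz)

# Illustrative example: at 5 kHz, the phase shift corresponding to a 50 ns
# time difference (the order of magnitude resolved by the hardware) is tiny.
f = 5e3
delta_phi = 2 * math.pi * f * 50e-9   # phase shift for a 50 ns ITD
dt = itd_from_phase(delta_phi, f)
print(delta_phi, dt)
```

This makes clear why precise temporal coding is needed: resolving tens of nanoseconds corresponds to resolving milliradian-scale phase differences, which the accelerated analog substrate achieves through Hebbian input selection.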


International Symposium on Circuits and Systems | 2017

Pattern representation and recognition with accelerated analog neuromorphic systems

Mihai A. Petrovici; Sebastian Schmitt; Johann Klähn; D. Stockel; A. Schroeder; Guillaume Bellec; Johannes Bill; Oliver Breitwieser; Ilja Bytschok; Andreas Grübl; Maurice Güttler; Andreas Hartel; Stephan Hartmann; Dan Husmann; Kai Husmann; Sebastian Jeltsch; Vitali Karasenko; Mitja Kleider; Christoph Koke; Alexander Kononov; Christian Mauch; Eric Müller; Paul Müller; Johannes Partzsch; Thomas Pfeil; Stefan Schiefer; Stefan Scholze; A. Subramoney; Vasilis Thanasoulis; Bernhard Vogginger

Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks. In this paper, we review several possibilities to reverse map these architectures to biologically more realistic spiking networks with the aim of emulating them on fast, low-power neuromorphic hardware. Since many of these devices employ analog components, which cannot be perfectly controlled, finding ways to compensate for the resulting effects represents a key challenge. Here, we discuss three different strategies to address this problem: the addition of auxiliary network components for stabilizing activity, the utilization of inherently robust architectures, and a training method for hardware-emulated networks that functions without perfect knowledge of the system's dynamics and parameters. For all three scenarios, we corroborate our theoretical considerations with experimental results on accelerated analog neuromorphic platforms.


Frontiers in Neuroscience | 2018

Deep Learning With Spiking Neurons: Opportunities and Challenges

Michael Pfeiffer; Thomas Pfeil

Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs on neuromorphic hardware exhibit favorable properties such as low power consumption, fast inference, and event-driven information processing. This makes them interesting candidates for the efficient implementation of deep neural networks, the method of choice for many machine learning tasks. In this review, we address the opportunities that deep spiking networks offer and investigate in detail the challenges associated with training SNNs in a way that makes them competitive with conventional deep learning, but simultaneously allows for efficient mapping to hardware. A wide range of training methods for SNNs is presented, ranging from the conversion of conventional deep networks into SNNs and constrained training before conversion, to spiking variants of backpropagation and biologically motivated variants of STDP. The goal of our review is to define a categorization of SNN training methods, and summarize their advantages and drawbacks. We further discuss relationships between SNNs and binary networks, which are becoming popular for efficient digital hardware implementation. Neuromorphic hardware platforms have great potential to enable deep spiking networks in real-world applications. We compare the suitability of various neuromorphic systems that have been developed over the past years, and investigate potential use cases. Neuromorphic approaches and conventional machine learning should not be considered simply two solutions to the same classes of problems; instead, it is possible to identify and exploit their task-specific advantages. Deep SNNs offer great opportunities to work with new types of event-based sensors, exploit temporal codes and local on-chip learning, and we have so far just scratched the surface of realizing these advantages in practical applications.
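One of the conversion ideas surveyed in this review is that a ReLU unit's activation can be approximated by the firing rate of an integrate-and-fire neuron with reset by subtraction. A minimal sketch of this correspondence, with all constants (threshold, time step, simulation length) chosen purely for illustration:

```python
# Sketch of the rate-based ANN-to-SNN conversion idea: a ReLU activation
# is approximated by the firing rate of an integrate-and-fire unit whose
# membrane potential is reset by subtracting the threshold, so that no
# input charge is lost between spikes.

def relu(x):
    return max(x, 0.0)

def spiking_rate(x, threshold=1.0, steps=10_000, dt=1e-3):
    """Integrate-and-fire with reset-by-subtraction; returns the spike rate."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += x * dt
        if v >= threshold:
            v -= threshold        # subtraction keeps the residual charge
            spikes += 1
    return spikes / (steps * dt)  # spikes per unit time

for x in (-0.5, 0.3, 0.8):
    print(relu(x), spiking_rate(x))
```

Because the reset subtracts rather than zeroes the membrane potential, the long-run rate converges to the ReLU output for positive inputs and to zero for negative ones; the finite simulation window is what introduces the quantization error that conversion methods must manage.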


BMC Neuroscience | 2013

Classification of multivariate data with a spiking neural network on neuromorphic hardware

Michael Schmuker; Thomas Pfeil; Martin P. Nawrot

Progress in the field of computational systems neuroscience has uncovered a number of computational principles employed by nervous systems. At the same time neuromorphic hardware systems have evolved to a state where fast in silico implementations of complex neural networks are feasible. The current challenge is to identify and implement functional neural networks that enable neuromorphic computing to solve real-world problems. Here, we present a generic spiking neural network for the supervised classification of multivariate data, a common problem in signal and data analysis. The network architecture was inspired by the data processing scheme of the olfactory system [1]. It has a three-stage architecture. In the first stage, real-valued multivariate data is encoded into a bounded, positive firing-rate representation. The second stage removes correlation between input channels through lateral inhibition. Supervised training affects synaptic weights in the third stage, where classification of input patterns is performed. We implemented and tested our network on the Spikey neuromorphic hardware system [2]. Our network performed on the same level as a Naive Bayes classifier on several benchmark data sets. Our classifier network is an important proof-of-principle for a bio-inspired functional spiking network implemented on neuromorphic hardware performing a real-world computing task. This study was funded by the DFG (SCHM2474/1-1, MS), the BMBF (01GQ1001D, MS and MN, 01GQ0941, MN) and the European Union (243914, TP).
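The first network stage described above encodes real-valued data into a bounded, positive firing-rate representation. A minimal sketch of one way to do this, where the linear min-max mapping and the rate bounds are illustrative assumptions rather than the paper's actual encoding:

```python
import numpy as np

# Illustrative first-stage encoding: map each feature column linearly
# into a bounded, positive firing-rate range [r_min, r_max].

def encode_rates(X, r_min=5.0, r_max=70.0):
    """Per-feature min-max mapping of real values into [r_min, r_max] Hz."""
    lo = X.min(axis=0)
    hi = X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return r_min + (X - lo) / span * (r_max - r_min)

X = np.array([[0.0, 10.0],
              [5.0, 20.0],
              [10.0, 30.0]])
rates = encode_rates(X)
print(rates)
```

Bounding the rates from below keeps every channel active (so lateral inhibition in the second stage has something to act on), and bounding from above respects the finite firing-rate range of the hardware neurons.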


Physical Review X | 2016

Effect of Heterogeneity on Decorrelation Mechanisms in Spiking Neural Networks: A Neuromorphic-Hardware Study

Thomas Pfeil; Jakob Jordan; Andreas Grübl; Markus Diesmann; Johannes Schemmel; Tom Tetzlaff; K. Meier


arXiv | 2016

Effect of heterogeneity on decorrelation mechanisms in spiking neural networks

Thomas Pfeil; Andreas Grübl; Johannes Schemmel; K. Meier

Collaboration


Dive into Thomas Pfeil's collaborations.

Top Co-Authors

K. Meier

Heidelberg University
