Andreas Grübl
Heidelberg University
Publications
Featured research published by Andreas Grübl.
international joint conference on neural networks | 2006
Johannes Schemmel; Andreas Grübl; K. Meier; Eilif Mueller
This paper describes an area-efficient mixed-signal implementation of synapse-based long-term plasticity realized in a VLSI model of a spiking neural network. The artificial synapses are based on an implementation of spike-timing-dependent plasticity (STDP). In the biological specimen, STDP is a mechanism acting locally in each synapse. The presented electronic implementation succeeds in maintaining this high level of parallelism and simultaneously achieves a synapse density of more than 9k synapses per mm² in a 180 nm technology. This allows the construction of neural micro-circuits close to the biological specimen while maintaining a speed several orders of magnitude faster than biological real time. The large acceleration factor enhances the possibilities to investigate key aspects of plasticity, e.g., by performing extensive parameter searches.
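For readers unfamiliar with the rule, here is a minimal sketch of the pairwise STDP update referenced above. The exponential windows and all parameter names and values (a_plus, tau_plus, etc.) are generic textbook defaults for illustration, not the constants of the chip's analog circuits.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pairwise STDP weight change for one pre/post spike pair.

    Times in ms of biological time (the chip runs orders of
    magnitude faster). Pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, each with an exponential
    window.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)

# Example: a causal pair 5 ms apart yields a small potentiation.
print(stdp_dw(t_pre=10.0, t_post=15.0))  # ~ +0.0078
```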
Frontiers in Neuroscience | 2013
Thomas Pfeil; Andreas Grübl; Sebastian Jeltsch; Eric Müller; Paul Müller; Mihai A. Petrovici; Michael Schmuker; Daniel Brüderle; Johannes Schemmel; K. Meier
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.
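The fixed-pattern-noise calibration mentioned above can be illustrated with a toy routine: sweep a control parameter for each neuron, fit each neuron's transfer curve, and invert the fit to hit a common target. Everything below (the simulated measure_rate, the linear model, the parameter names) is a hypothetical stand-in for the real calibration software, not its actual API.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4  # number of neurons
# Fixed-pattern variation: per-neuron gain and offset, unknown a priori.
gain = 1.0 + 0.1 * rng.standard_normal(N)
offset = 0.05 * rng.standard_normal(N)

def measure_rate(i, bias):
    """Hypothetical hardware readout: firing rate vs. bias setting."""
    return gain[i] * bias + offset[i]

# Calibration: sweep the bias, fit a linear model per neuron.
biases = np.linspace(0.1, 1.0, 10)
calib = []
for i in range(N):
    rates = np.array([measure_rate(i, b) for b in biases])
    g, o = np.polyfit(biases, rates, 1)   # rate ~ g * bias + o
    calib.append((g, o))

def bias_for_rate(i, target):
    """Invert the fitted transfer curve of neuron i."""
    g, o = calib[i]
    return (target - o) / g

# All neurons now hit the same target despite fixed-pattern noise.
print([round(measure_rate(i, bias_for_rate(i, 0.5)), 3) for i in range(N)])
```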
Biological Cybernetics | 2011
Daniel Brüderle; Mihai A. Petrovici; Bernhard Vogginger; Matthias Ehrlich; Thomas Pfeil; Sebastian Millner; Andreas Grübl; Karsten Wendt; Eric Müller; Marc-Olivier Schwartz; Dan Husmann de Oliveira; Sebastian Jeltsch; Johannes Fieres; Moritz Schilling; Paul Müller; Oliver Breitwieser; Venelin Petkov; Lyle Muller; Andrew P. Davison; Pradeep Krishnamurthy; Jens Kremkow; Mikael Lundqvist; Eilif Muller; Johannes Partzsch; Stefan Scholze; Lukas Zühl; Christian Mayr; Alain Destexhe; Markus Diesmann; Tobias C. Potjans
In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware–software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
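PyNN, the model description language named above, is publicly available. To convey the flavor of a simulator-independent script, a minimal example against the NEST backend might look as follows; on the neuromorphic platform the same description would target a hardware backend instead, and all parameter values here are placeholders.

```python
import pyNN.nest as sim  # the same script can target a hardware backend

sim.setup(timestep=0.1)  # ms

# 100 leaky integrate-and-fire neurons with conductance-based synapses.
pop = sim.Population(100, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))
stim = sim.Population(20, sim.SpikeSourcePoisson(rate=30.0))

# Sparse random excitatory projection from the stimulus to the network.
sim.Projection(stim, pop,
               sim.FixedProbabilityConnector(p_connect=0.2),
               sim.StaticSynapse(weight=0.004, delay=1.0),
               receptor_type="excitatory")

pop.record("spikes")
sim.run(1000.0)  # ms
spikes = pop.get_data("spikes")
sim.end()
```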
international symposium on circuits and systems | 2012
Johannes Schemmel; Andreas Grübl; Stephan Hartmann; Alexander Kononov; Christian Mayr; K. Meier; Sebastian Millner; Johannes Partzsch; Stefan Schiefer; Stefan Scholze; René Schüffny; Marc-Olivier Schwartz
This demonstration is based on the wafer-scale neuromorphic system presented in the previous papers by Schemmel et al. (2010), Scholze et al. (2011), and Millner et al. (2010). The demonstration setup will allow visitors to monitor and partially manipulate the neural events at every level. They will gain insight into the complex interplay between packet-based and real-time communication necessary to combine continuous-time mixed-signal neural networks with a packet-based transport network. Several network experiments implemented on the setup will be accessible for user interaction.
IEEE Transactions on Biomedical Circuits and Systems | 2017
Simon Friedmann; Johannes Schemmel; Andreas Grübl; Andreas Hartel; Matthias Hock; K. Meier
We present results from a new approach to learning and plasticity in neuromorphic hardware systems: to enable flexibility in the implementable learning mechanisms while keeping the high efficiency associated with neuromorphic implementations, we combine a general-purpose processor with full-custom analog elements. This processor operates in parallel with a fully parallel neuromorphic system consisting of an array of synapses connected to analog, continuous-time neuron circuits. Novel analog correlation sensor circuits process spike events for each synapse in parallel and in real time. The processor uses this pre-processing to compute new weights, possibly incorporating additional information, according to its program. Therefore, to a certain extent, learning rules can be defined in software, giving a large degree of flexibility. Synapses realize correlation detection geared towards spike-timing-dependent plasticity (STDP) as the central computational primitive in the analog domain. Operating at a speed-up factor of 1000 compared to the biological time scale, we measure time constants from tens to hundreds of microseconds. We analyze variability across multiple chips and demonstrate learning using a multiplicative STDP rule. We conclude that the presented approach will enable flexible and efficient learning as a platform for neuroscientific research and technological applications.
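The division of labor described here (analog sensors accumulate spike correlations, a processor computes weight updates in software) can be sketched as follows. The function name, the correlation inputs, and the multiplicative-rule constants are illustrative assumptions, not the actual firmware interface of the chip.

```python
import numpy as np

def plasticity_step(weights, a_causal, a_acausal, lam=0.1, w_max=63):
    """One software-defined update pass, as an embedded plasticity
    processor might run it: the analog sensors have already accumulated
    causal and acausal spike correlations per synapse, and the rule
    itself is ordinary code that can be swapped out.

    Multiplicative STDP: potentiation scales with the remaining
    headroom (w_max - w), depression with the weight itself.
    """
    dw = lam * ((w_max - weights) * a_causal - weights * a_acausal)
    return np.clip(weights + dw, 0, w_max)

# Toy correlation traces for 8 synapses (normalized measurements).
w = np.full(8, 32.0)
a_c = np.random.default_rng(0).uniform(0, 1, 8)   # causal (pre -> post)
a_a = np.random.default_rng(1).uniform(0, 1, 8)   # acausal (post -> pre)
print(plasticity_step(w, a_c, a_a))
```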
international work-conference on artificial and natural neural networks | 2007
Stefan Philipp; Andreas Grübl; K. Meier; Johannes Schemmel
This paper presents a network architecture to interconnect mixed-signal VLSI integrate-and-fire neural networks in a way that preserves the timing of the neural network data. The architecture uses isochronous connections to reserve network bandwidth and is optimized for the small event packets that have to be exchanged in spiking hardware neural networks. End-to-end delay is reduced to the minimum while retaining 100% throughput. As buffering is avoided wherever possible, the resulting jitter is independent of the number of neural network chips used. This allows experiments with neural networks of thousands of artificial neurons at a speedup of up to 10⁵ compared to biology. Simulation results are presented. The work focuses on the interconnection of hardware neural networks. Beyond this, the proposed architecture is suitable for any application where bandwidth requirements are known and constant low delay is needed.
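The bandwidth-reservation idea can be illustrated with a toy time-slot schedule: each connection receives fixed slots in a repeating frame, so delay and jitter are fixed at configuration time instead of depending on runtime buffering. The frame length and connection names below are invented for illustration and do not reflect the paper's actual protocol.

```python
def reserve_slots(frame_len, demands):
    """Assign fixed time slots in a repeating frame to each connection.

    demands: {connection_id: slots_per_frame}. Returns the slot table,
    or raises if the requested bandwidth exceeds the frame capacity.
    Because the table is static, every packet's delay is known upfront
    and is independent of how many chips share the network.
    """
    if sum(demands.values()) > frame_len:
        raise ValueError("aggregate demand exceeds frame capacity")
    table, slot = {}, 0
    for conn, n in demands.items():
        table[conn] = list(range(slot, slot + n))
        slot += n
    return table

# Three neural-event streams sharing a 16-slot frame.
print(reserve_slots(16, {"chip0->chip1": 6,
                         "chip1->chip2": 6,
                         "chip2->chip0": 3}))
```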
international work-conference on artificial and natural neural networks | 2007
Daniel Brüderle; Andreas Grübl; K. Meier; Eilif Mueller; Johannes Schemmel
This paper presents configuration methods for an existing neuromorphic hardware system and shows first experimental results. The utilized mixed-signal VLSI device implements a highly accelerated network of integrate-and-fire neurons. We present a software framework that provides the means to interface the hardware and explore it from the point of view of neuroscience. It allows direct comparison of both spike times and membrane potentials, as emulated by the hardware or computed by the software simulator NEST, from within a single software scope. Membrane potential and spike-timing-dependent plasticity measurements are shown, which illustrate the capabilities of the software framework and document the functionality of the chip.
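In spirit, such a single-scope comparison might reduce to something like the following sketch, where the spike-train arrays stand in for what the framework would fetch from the chip and from NEST respectively; the function is a hypothetical helper, not part of the described framework.

```python
import numpy as np

def spike_time_deviation(hw_spikes, nest_spikes):
    """Match each hardware spike to the nearest NEST spike and report
    the mean absolute timing deviation in ms."""
    hw = np.asarray(hw_spikes)
    ref = np.sort(np.asarray(nest_spikes))
    idx = np.clip(np.searchsorted(ref, hw), 1, len(ref) - 1)
    nearest = np.where(np.abs(ref[idx] - hw) < np.abs(ref[idx - 1] - hw),
                       ref[idx], ref[idx - 1])
    return np.mean(np.abs(hw - nearest))

# Toy data: emulated spikes jittered a few tenths of a ms vs. NEST.
nest = [10.0, 22.5, 40.1, 57.8]
hw = [10.3, 22.1, 40.5, 58.0]
print(spike_time_deviation(hw, nest))  # ~0.3 ms
```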
international symposium on neural networks | 2017
Sebastian Schmitt; Johann Klähn; Guillaume Bellec; Andreas Grübl; Maurice Güttler; Andreas Hartel; Stephan Hartmann; Dan Husmann; Kai Husmann; Sebastian Jeltsch; Vitali Karasenko; Mitja Kleider; Christoph Koke; Alexander Kononov; Christian Mauch; Eric Müller; Paul Müller; Johannes Partzsch; Mihai A. Petrovici; Stefan Schiefer; Stefan Scholze; Vasilis Thanasoulis; Bernhard Vogginger; Robert A. Legenstein; Wolfgang Maass; Christian Mayr; René Schüffny; Johannes Schemmel; K. Meier
Emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating them on conventional computers, particularly in terms of speed and energy consumption. However, this usually comes at the cost of reduced control over the dynamics of the emulated networks. In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate. We first convert a deep neural network trained in software to a spiking network on the BrainScaleS wafer-scale neuromorphic system, thereby enabling an acceleration factor of 10,000 compared to the biological time domain. This mapping is followed by in-the-loop training, where in each training step the network activity is first recorded in hardware and then used to compute the parameter updates in software via backpropagation. An essential finding is that the parameter updates do not have to be precise; they only need to approximately follow the correct gradient, which simplifies the computation of updates. Using this approach, after only several tens of iterations, the spiking network reaches an accuracy close to that of the ideal software prototype. The presented techniques show that deep spiking networks emulated on analog neuromorphic devices can attain good computational performance despite the inherent variations of the analog substrate.
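The in-the-loop scheme reduces to a short loop: run the forward pass on the hardware, then compute gradients in software from the recorded activity. In the self-contained sketch below, emulate_on_hardware is a placeholder for the actual BrainScaleS tool chain (here, a noisy, distorted logistic model), and the task and learning rate are invented; the point is that updates computed against the distorted recordings still roughly follow the true gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

def emulate_on_hardware(w, x):
    """Placeholder for a hardware run: a distorted, noisy forward pass
    (fixed gain error plus trial-to-trial noise)."""
    z = 0.9 * (x @ w) + 0.05 * rng.standard_normal(x.shape[0])
    return 1.0 / (1.0 + np.exp(-z))            # recorded activity

# Toy binary classification task.
x = rng.standard_normal((256, 4))
y = (x[:, 0] + x[:, 1] > 0).astype(float)
w = np.zeros(4)

for step in range(50):
    a = emulate_on_hardware(w, x)              # 1) run on "hardware"
    grad = x.T @ (a - y) / len(y)              # 2) backprop in software
    w -= 0.5 * grad                            #    on recorded activity
print("accuracy:", np.mean((emulate_on_hardware(w, x) > 0.5) == y))
```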
international symposium on circuits and systems | 2017
Mihai A. Petrovici; Sebastian Schmitt; Johann Klähn; D. Stockel; A. Schroeder; Guillaume Bellec; Johannes Bill; Oliver Breitwieser; Ilja Bytschok; Andreas Grübl; Maurice Güttler; Andreas Hartel; Stephan Hartmann; Dan Husmann; Kai Husmann; Sebastian Jeltsch; Vitali Karasenko; Mitja Kleider; Christoph Koke; Alexander Kononov; Christian Mauch; Eric Müller; Paul Müller; Johannes Partzsch; Thomas Pfeil; Stefan Schiefer; Stefan Scholze; A. Subramoney; Vasilis Thanasoulis; Bernhard Vogginger
Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks. In this paper, we review several possibilities to reverse map these architectures to biologically more realistic spiking networks with the aim of emulating them on fast, low-power neuromorphic hardware. Since many of these devices employ analog components, which cannot be perfectly controlled, finding ways to compensate for the resulting effects represents a key challenge. Here, we discuss three different strategies to address this problem: the addition of auxiliary network components for stabilizing activity, the utilization of inherently robust architectures, and a training method for hardware-emulated networks that functions without perfect knowledge of the system's dynamics and parameters. For all three scenarios, we corroborate our theoretical considerations with experimental results on accelerated analog neuromorphic platforms.
international symposium on neural networks | 2017
Mihai A. Petrovici; Anna Schroeder; Oliver Breitwieser; Andreas Grübl; Johannes Schemmel; K. Meier
How spiking networks are able to perform probabilistic inference is an intriguing question, not only for understanding information processing in the brain, but also for transferring these computational principles to neuromorphic silicon circuits. A number of computationally powerful spiking network models have been proposed, but most of them have only been tested, under ideal conditions, in software simulations. Any implementation in an analog, physical system, be it in vivo or in silico, will generally lead to distorted dynamics due to the physical properties of the underlying substrate. In this paper, we discuss several such distortive effects that are difficult or impossible to remove by classical calibration routines or parameter training. We then argue that hierarchical networks of leaky integrate-and-fire neurons can offer the required robustness for physical implementation and demonstrate this with both software simulations and emulation on an accelerated analog neuromorphic device.