Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniel Brüderle is active.

Publications


Featured research published by Daniel Brüderle.


Frontiers in Neuroinformatics | 2008

PyNN: A Common Interface for Neuronal Network Simulators

Andrew P. Davison; Daniel Brüderle; Jochen Martin Eppler; Jens Kremkow; Eilif Muller; Dejan Pecevski; Laurent Perrinet; Pierre Yger

Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.
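The abstract's central idea, one experiment description running unchanged on several simulators, can be illustrated with a toy stand-in (plain Python; `MockNEST`, `MockNEURON`, and `run_experiment` are invented for illustration and are not the actual PyNN API):

```python
# Toy illustration of a simulator-agnostic interface in the spirit of
# PyNN (NOT the actual PyNN API -- all names here are invented).
# Each mock backend exposes the same simulate() method, so a single
# experiment description runs on any of them without modification.

class MockNEST:
    name = "NEST"
    def simulate(self, n_neurons, t_ms):
        # A real backend would build and run the network here.
        return {"backend": self.name, "neurons": n_neurons, "t_ms": t_ms}

class MockNEURON:
    name = "NEURON"
    def simulate(self, n_neurons, t_ms):
        return {"backend": self.name, "neurons": n_neurons, "t_ms": t_ms}

def run_experiment(sim):
    """One experiment script, written once, independent of the backend."""
    return sim.simulate(n_neurons=100, t_ms=1000.0)

# The same script runs on both "simulators" -- only the backend changes.
for backend in (MockNEST(), MockNEURON()):
    print(run_experiment(backend))
```

In actual PyNN the switch is the import line (e.g. `import pyNN.nest as sim` versus `import pyNN.neuron as sim`); the rest of the script stays the same.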


Frontiers in Neuroscience | 2013

Six networks on a universal neuromorphic computing substrate

Thomas Pfeil; Andreas Grübl; Sebastian Jeltsch; Eric Müller; Paul Müller; Mihai A. Petrovici; Michael Schmuker; Daniel Brüderle; Johannes Schemmel; K. Meier

In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.


International Symposium on Circuits and Systems | 2007

Modeling Synaptic Plasticity within Networks of Highly Accelerated I&F Neurons

Johannes Schemmel; Daniel Brüderle; K. Meier; Boris Ostendorf

When studying the different aspects of synaptic plasticity, the timescales involved range from milliseconds to hours, thus covering at least seven orders of magnitude. To make this temporal dynamic range accessible to the experimentalist, we have developed a highly accelerated analog VLSI model of leaky integrate-and-fire neurons. It incorporates fast and slow synaptic facilitation and depression mechanisms in its conductance-based synapses. By using a 180 nm process, 10⁵ synapses fit on a 25 mm² die. A single chip can model the temporal evolution of the synaptic weights in networks of up to 384 neurons with an acceleration factor of 10⁵ while recording the neural action potentials with a temporal resolution better than 30 µs biological time. This reduces the time needed for a 10 minute experiment to merely 6 ms, paving the way for complex parameter searches to reproduce biological findings. Due to a digital communication structure, larger networks can be built from multiple chips while retaining an acceleration factor of at least 10⁴.
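The quoted experiment times follow directly from the acceleration factor; a quick sanity check (the 10-minute experiment, 30 µs resolution, and 10⁵ speedup are taken from the abstract):

```python
# Wall-clock time on hardware running at an acceleration factor of 10^5.
acceleration = 1e5
bio_time_s = 10 * 60              # a 10-minute biological experiment
wall_time_s = bio_time_s / acceleration
print(round(wall_time_s * 1e3, 3), "ms")   # 6 ms of wall-clock time

# A biological temporal resolution of 30 us corresponds, in hardware
# time, to a sub-nanosecond sampling requirement:
bio_res_s = 30e-6
hw_res_s = bio_res_s / acceleration
print(round(hw_res_s * 1e9, 3), "ns")      # about 0.3 ns
```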


Biological Cybernetics | 2011

A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems

Daniel Brüderle; Mihai A. Petrovici; Bernhard Vogginger; Matthias Ehrlich; Thomas Pfeil; Sebastian Millner; Andreas Grübl; Karsten Wendt; Eric Müller; Marc-Olivier Schwartz; Dan Husmann de Oliveira; Sebastian Jeltsch; Johannes Fieres; Moritz Schilling; Paul Müller; Oliver Breitwieser; Venelin Petkov; Lyle Muller; Andrew P. Davison; Pradeep Krishnamurthy; Jens Kremkow; Mikael Lundqvist; Eilif Muller; Johannes Partzsch; Stefan Scholze; Lukas Zühl; Christian Mayr; Alain Destexhe; Markus Diesmann; Tobias C. Potjans

In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware–software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results.


Frontiers in Neuroinformatics | 2009

Establishing a Novel Modeling Tool: A Python-based Interface for a Neuromorphic Hardware System

Daniel Brüderle; Eric Müller; Andrew P. Davison; Eilif Muller; Johannes Schemmel; K. Meier

Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and neuromorphic engineering in neuroscience are rather disjoint. We present a software concept that provides the possibility to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language which allows for unified experiment descriptions that can be run on various simulation platforms without modification, implying experiment portability and a huge simplification of the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup and results acquired by utilizing both the hardware system and a software simulator are demonstrated.


Frontiers in Computational Neuroscience | 2010

Compensating Inhomogeneities of Neuromorphic VLSI Devices Via Short-Term Synaptic Plasticity

Johannes Bill; Klaus Schuch; Daniel Brüderle; Johannes Schemmel; Wolfgang Maass; K. Meier

Recent developments in neuromorphic hardware engineering make mixed-signal VLSI neural network models promising candidates for neuroscientific research tools and massively parallel computing devices, especially for tasks which exhaust the computing power of software simulations. Still, like all analog hardware systems, neuromorphic models suffer from constrained configurability and production-related fluctuations of device characteristics. Since future systems, involving ever-smaller structures, will also inevitably exhibit such inhomogeneities at the unit level, self-regulation properties become a crucial requirement for their successful operation. By applying a cortically inspired self-adjusting network architecture, we show that the activity of generic spiking neural networks emulated on a neuromorphic hardware system can be kept within a biologically realistic firing regime and gains remarkable robustness against transistor-level variations. As a first approach of this kind in engineering practice, the short-term synaptic depression and facilitation mechanisms implemented within an analog VLSI model of I&F neurons are functionally utilized for network-level stabilization. We present experimental data, acquired both from the hardware model and from comparative software simulations, which prove the applicability of the employed paradigm to neuromorphic VLSI devices.
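The short-term depression mechanism used here for stabilization is commonly modeled in software in the Tsodyks-Markram style; a minimal sketch of that formulation (this is a generic software model with illustrative parameters `U` and `tau_rec_ms`, not the hardware circuit described in the paper):

```python
import math

def depressing_synapse(spike_times_ms, U=0.5, tau_rec_ms=100.0):
    """Tsodyks-Markram-style short-term depression (software sketch):
    each spike releases a fraction U of the available resources x,
    which recover toward 1 with time constant tau_rec between spikes."""
    x = 1.0            # available synaptic resources
    last_t = None
    efficacies = []
    for t in spike_times_ms:
        if last_t is not None:
            dt = t - last_t
            # exponential recovery of resources toward 1
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec_ms)
        efficacies.append(U * x)   # effective release for this spike
        x -= U * x                 # depletion by the spike
        last_t = t
    return efficacies

# A regular 20 ms spike train: successive efficacies decrease,
# which is the depression that damps runaway network activity.
eff = depressing_synapse([0.0, 20.0, 40.0, 60.0])
print([round(e, 3) for e in eff])
```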


International Work-Conference on Artificial and Natural Neural Networks | 2007

A software framework for tuning the dynamics of neuromorphic silicon towards biology

Daniel Brüderle; Andreas Grübl; K. Meier; Eilif Mueller; Johannes Schemmel

This paper presents configuration methods for an existing neuromorphic hardware system and shows first experimental results. The utilized mixed-signal VLSI device implements a highly accelerated network of integrate-and-fire neurons. We present a software framework which makes it possible to interface the hardware and explore it from the point of view of neuroscience. It allows both spike times and membrane potentials, as emulated by the hardware or computed by the software simulator NEST, to be compared directly from within a single software scope. Membrane potential and spike-timing-dependent plasticity measurements are shown which illustrate the capabilities of the software framework and document the functionality of the chip.


Genetic and Evolutionary Computation Conference | 2004

On the Evolution of Analog Electronic Circuits Using Building Blocks on a CMOS FPTA

Jörg Langeheine; Martin A. Trefzer; Daniel Brüderle; K. Meier; Johannes Schemmel

This article summarizes two experiments utilizing building blocks to find analog electronic circuits on a CMOS Field Programmable Transistor Array (FPTA). The FPTA features 256 programmable transistors whose channel geometry and routing can be configured to form a large variety of transistor level analog circuits. The transistor cells are either of type PMOS or NMOS and are arranged in a checkerboard pattern. Two case studies focus on improving artificial evolution by using a building block library of four digital gates consisting of a NOR, a NAND, a buffer and an inverter. The methodology is applied to the design of the more complex logic gates XOR and XNOR as well as to the evolution of circuits discriminating between square waves of different frequencies.


International Symposium on Neural Networks | 2009

High-conductance states on a neuromorphic hardware system

Bernhard Kaplan; Daniel Brüderle; Johannes Schemmel; K. Meier

Under typical synaptic stimulation, cortical neurons exhibit a total membrane conductance which is significantly increased compared to a situation without any input spikes. This results in a shorter membrane time constant and thus in an increased capability of the neuron to detect coincidences in its synaptic input. For this study, a neuromorphic hardware device was utilized which does not provide direct access to its membrane conductances. Motivated by the aim of finding biologically realistic configuration regimes for the chip operation, a purely spike-based method for the estimation of membrane conductances is presented, making it possible to test the hardware membrane dynamics. A proof of principle is given by pure software simulations. Hardware results are presented which illustrate the functionality of the method and show the possibility of generating high-conductance states in the utilized VLSI neurons. In the final section, limits and useful implications of the proposed method are discussed.
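The link between increased conductance and a shorter membrane time constant follows from τ_m = C_m / g_total; a small numerical sketch (all parameter values below are illustrative assumptions, not taken from the paper):

```python
# tau_m = C_m / g_total: added synaptic conductance shortens the
# membrane time constant (illustrative values, not from the paper).
C_m = 200e-12        # membrane capacitance, 200 pF (assumed)
g_leak = 10e-9       # leak conductance, 10 nS (assumed)
g_syn = 40e-9        # extra synaptic conductance under stimulation (assumed)

tau_rest = C_m / g_leak            # no synaptic input
tau_high = C_m / (g_leak + g_syn)  # high-conductance state

# The stimulated neuron integrates over a much shorter window,
# which is what improves its coincidence detection.
print(round(tau_rest * 1e3, 3), "ms")   # 20 ms at rest
print(round(tau_high * 1e3, 3), "ms")   # 4 ms under stimulation
```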


BMC Neuroscience | 2007

Verifying the biological relevance of a neuromorphic hardware device

Daniel Brüderle; K. Meier; Eilif Mueller; Johannes Schemmel

Background: Within the FACETS research project, a neuromorphic mixed-signal VLSI device was created [1]. It was designed to exhibit a linear correspondence with an I&F neuron model, including synaptic plasticity and short-term synaptic dynamics. It operates with a speedup factor of around 10⁵ compared to biological real time. Utilizing the existing prototype, networks of up to 384 neurons and the temporal evolution of the weights of 10⁵ synapses under STDP can be modeled.

Methods: We developed a software framework which allows unified access to both the hardware system and the pure software neuro-simulator NEST, providing the possibility to verify that the chip can be operated in a biologically realistic regime. From within a single software scope, we can compare and post-process results obtained from both systems, based on identical input and network setups.

Collaboration


Dive into Daniel Brüderle's collaborations.

Top Co-Authors

K. Meier (Heidelberg University)
Andrew P. Davison (Centre national de la recherche scientifique)
Bernhard Vogginger (Dresden University of Technology)
Jens Kremkow (Humboldt University of Berlin)
Karsten Wendt (Dresden University of Technology)
Lukas Zühl (Dresden University of Technology)