Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oliver Breitwieser is active.

Publications


Featured research published by Oliver Breitwieser.


Biological Cybernetics | 2011

A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems

Daniel Brüderle; Mihai A. Petrovici; Bernhard Vogginger; Matthias Ehrlich; Thomas Pfeil; Sebastian Millner; Andreas Grübl; Karsten Wendt; Eric Müller; Marc-Olivier Schwartz; Dan Husmann de Oliveira; Sebastian Jeltsch; Johannes Fieres; Moritz Schilling; Paul Müller; Oliver Breitwieser; Venelin Petkov; Lyle Muller; Andrew P. Davison; Pradeep Krishnamurthy; Jens Kremkow; Mikael Lundqvist; Eilif Muller; Johannes Partzsch; Stefan Scholze; Lukas Zühl; Christian Mayr; Alain Destexhe; Markus Diesmann; Tobias C. Potjans

In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at establishing this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware–software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are proven with a variety of experimental results.
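
As a concrete illustration of the simulator-independent model description the workflow builds on, the following is a minimal PyNN sketch (assuming a PyNN 0.8-style API and the NEST back-end; per the design above, retargeting the neuromorphic back-end would amount to swapping the import):

    # Minimal PyNN sketch; the back-end is selected by the import line.
    import pyNN.nest as sim  # hypothetically: a hardware back-end module instead

    sim.setup(timestep=0.1)  # ms

    # 100 conductance-based integrate-and-fire neurons
    pop = sim.Population(100, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))

    # Independent Poisson background drive, one source per neuron
    noise = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
    sim.Projection(noise, pop, sim.OneToOneConnector(),
                   sim.StaticSynapse(weight=0.01, delay=1.0))

    pop.record("spikes")
    sim.run(1000.0)  # ms
    spikes = pop.get_data("spikes")
    sim.end()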


PLOS ONE | 2014

Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms

Mihai A. Petrovici; Bernhard Vogginger; Paul Müller; Oliver Breitwieser; Mikael Lundqvist; Lyle Muller; Matthias Ehrlich; Alain Destexhe; Anders Lansner; René Schüffny; Johannes Schemmel; K. Meier

Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
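
A rough numerical sketch of the distortion mechanisms listed above (all figures hypothetical, not taken from the BrainScaleS specification): fixed-pattern noise drawn once per synapse, trial-to-trial variability redrawn per run, and limited configurability modeled as quantization to a 4-bit weight scale.

    import numpy as np

    rng = np.random.default_rng(42)
    w_target = rng.uniform(0.0, 1.0, size=1000)  # ideal synaptic weights

    # Fixed-pattern noise: drawn once per synapse, constant across trials
    fixed_pattern = rng.normal(1.0, 0.2, size=w_target.shape)

    # Limited configurability: quantize to a 4-bit weight scale (hypothetical)
    levels = 2 ** 4 - 1
    w_quantized = np.round(w_target * levels) / levels

    def emulate_trial(w):
        # One emulation run: fixed-pattern plus trial-to-trial variability
        trial_noise = rng.normal(1.0, 0.05, size=w.shape)
        return w * fixed_pattern * trial_noise

    w_effective = emulate_trial(w_quantized)
    print("mean absolute distortion:", np.mean(np.abs(w_effective - w_target)))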


International Symposium on Circuits and Systems | 2017

Pattern representation and recognition with accelerated analog neuromorphic systems

Mihai A. Petrovici; Sebastian Schmitt; Johann Klähn; D. Stockel; A. Schroeder; Guillaume Bellec; Johannes Bill; Oliver Breitwieser; Ilja Bytschok; Andreas Grübl; Maurice Güttler; Andreas Hartel; Stephan Hartmann; Dan Husmann; Kai Husmann; Sebastian Jeltsch; Vitali Karasenko; Mitja Kleider; Christoph Koke; Alexander Kononov; Christian Mauch; Eric Müller; Paul Müller; Johannes Partzsch; Thomas Pfeil; Stefan Schiefer; Stefan Scholze; A. Subramoney; Vasilis Thanasoulis; Bernhard Vogginger

Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks. In this paper, we review several possibilities to reverse map these architectures to biologically more realistic spiking networks with the aim of emulating them on fast, low-power neuromorphic hardware. Since many of these devices employ analog components, which cannot be perfectly controlled, finding ways to compensate for the resulting effects represents a key challenge. Here, we discuss three different strategies to address this problem: the addition of auxiliary network components for stabilizing activity, the utilization of inherently robust architectures and a training method for hardware-emulated networks that functions without perfect knowledge of the system's dynamics and parameters. For all three scenarios, we corroborate our theoretical considerations with experimental results on accelerated analog neuromorphic platforms.
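
The third strategy, training without perfect knowledge of the substrate, can be sketched as an in-the-loop scheme: the forward pass runs on the device (here replaced by a surrogate function with a hidden gain), and weight updates are computed from the measured output alone. The surrogate, the task and all parameters below are hypothetical stand-ins.

    import numpy as np

    rng = np.random.default_rng(0)

    def device_forward(w, x):
        # Stand-in for the analog device: hidden gain and readout noise,
        # both unknown to the training loop.
        return np.tanh(0.8 * x @ w
                       + rng.normal(0.0, 0.01, size=(x.shape[0], w.shape[1])))

    # Tiny regression task
    x = rng.normal(size=(64, 5))
    y_target = np.tanh(x @ rng.normal(size=(5, 3)))

    w = rng.normal(scale=0.1, size=(5, 3))
    for step in range(200):
        y = device_forward(w, x)  # forward pass "on hardware"
        err = y - y_target
        # The update uses only the measured output y, never the device internals
        w -= 0.1 * x.T @ (err * (1.0 - y ** 2)) / len(x)

    print("final MSE:", np.mean((device_forward(w, x) - y_target) ** 2))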


Scientific Reports | 2018

Spiking neurons with short-term synaptic plasticity form superior generative networks

Luziwei Leng; Roman Martel; Oliver Breitwieser; Ilja Bytschok; Walter Senn; Johannes Schemmel; K. Meier; Mihai A. Petrovici

Spiking networks that perform probabilistic inference have been proposed both as models of cortical computation and as candidates for solving problems in machine learning. However, the evidence for spike-based computation being in any way superior to non-spiking alternatives remains scarce. We propose that short-term synaptic plasticity can provide spiking networks with distinct computational advantages compared to their classical counterparts. When learning from high-dimensional, diverse datasets, deep attractors in the energy landscape often cause mixing problems for the sampling process. Classical algorithms solve this problem by employing various tempering techniques, which are both computationally demanding and require global state updates. We demonstrate how similar results can be achieved in spiking networks endowed with local short-term synaptic plasticity. Additionally, we discuss how these networks can even outperform tempering-based approaches when the training data is imbalanced. We thereby uncover a powerful computational property of the biologically inspired, local, spike-triggered synaptic dynamics, based simply on a limited pool of synaptic resources, which enables these networks to deal with complex sensory data.
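
The local mechanism in question can be illustrated with a Tsodyks-Markram-style model of a depressing synapse, in which each spike consumes part of a finite resource pool that recovers between spikes; the parameter values below are illustrative only.

    import numpy as np

    U = 0.5          # fraction of available resources used per spike
    tau_rec = 100.0  # recovery time constant of the resource pool (ms)

    def stp_efficacies(spike_times, w_static=1.0):
        # Effective synaptic weight at each presynaptic spike time
        R = 1.0  # available fraction of synaptic resources
        last_t, eff = None, []
        for t in spike_times:
            if last_t is not None:
                # resources recover exponentially between spikes
                R = 1.0 - (1.0 - R) * np.exp(-(t - last_t) / tau_rec)
            eff.append(w_static * U * R)
            R -= U * R  # a spike consumes part of the pool
            last_t = t
        return eff

    # A burst depresses the synapse; a long pause lets it recover
    print(stp_efficacies([0.0, 10.0, 20.0, 30.0, 300.0]))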


International Symposium on Neural Networks | 2017

Robustness from structure: Inference with hierarchical spiking networks on analog neuromorphic hardware

Mihai A. Petrovici; Anna Schroeder; Oliver Breitwieser; Andreas Grübl; Johannes Schemmel; K. Meier

How spiking networks are able to perform probabilistic inference is an intriguing question, not only for understanding information processing in the brain, but also for transferring these computational principles to neuromorphic silicon circuits. A number of computationally powerful spiking network models have been proposed, but most of them have only been tested, under ideal conditions, in software simulations. Any implementation in an analog, physical system, be it in vivo or in silico, will generally lead to distorted dynamics due to the physical properties of the underlying substrate. In this paper, we discuss several such distortive effects that are difficult or impossible to remove by classical calibration routines or parameter training. We then argue that hierarchical networks of leaky integrate-and-fire neurons can offer the required robustness for physical implementation and demonstrate this with both software simulations and emulation on an accelerated analog neuromorphic device.
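
To make the notion of substrate-induced distortion concrete, here is a minimal sketch (illustrative parameters only) of a leaky integrate-and-fire population with fixed-pattern spread on its time constants and thresholds, the kind of neuron-to-neuron variability that calibration cannot fully remove:

    import numpy as np

    rng = np.random.default_rng(1)

    n = 100
    tau_m = rng.normal(20.0, 2.0, n)      # membrane time constants (ms)
    v_th = rng.normal(-50.0, 1.0, n)      # per-neuron threshold spread (mV)
    v_rest = v_reset = -65.0
    dt, t_sim, drive = 0.1, 1000.0, 20.0  # ms, ms, effective input (mV)

    v = np.full(n, v_rest)
    spike_counts = np.zeros(n)
    for _ in range(int(t_sim / dt)):
        v += dt / tau_m * (v_rest - v + drive)  # leaky integration
        fired = v >= v_th
        spike_counts[fired] += 1
        v[fired] = v_reset

    rates = spike_counts / (t_sim / 1000.0)  # Hz
    print("firing rates: %.1f +/- %.1f Hz" % (rates.mean(), rates.std()))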


BMC Neuroscience | 2015

Deterministic neural networks as sources of uncorrelated noise for probabilistic computations

Jakob Jordan; Tom Tetzlaff; Mihai A. Petrovici; Oliver Breitwieser; Ilja Bytschok; Johannes Bill; Johannes Schemmel; K. Meier; Markus Diesmann

Neural-network models of brain function often rely on the presence of noise [1-4]. To date, the interplay of microscopic noise sources and network function is only poorly understood. In computer simulations and in neuromorphic hardware [5-7], the number of noise sources (random-number generators) is limited. In consequence, neurons in large functional network models have to share noise sources and are therefore correlated. In general, it is unclear how shared-noise correlations affect the performance of functional network models. Further, there is so far no solution to the problem of how a limited number of noise sources can supply a large number of functional units with uncorrelated noise. Here, we investigate the performance of neural Boltzmann machines [2-4]. We show that correlations in the background activity are detrimental to the sampling performance and that the deviations from the target distribution scale inversely with the number of noise sources. Further, we show that this problem can be overcome by replacing the finite ensemble of independent noise sources by a recurrent neural network with the same number of units. As shown recently, inhibitory feedback, abundant in biological neural networks, serves as a powerful decorrelation mechanism [8,9]: shared-noise correlations are actively suppressed by the network dynamics. By exploiting this effect, the network performance is significantly improved. Hence, recurrent neural networks can serve as natural finite-size noise sources for functional neural networks, both in biological and in synthetic neuromorphic substrates. Finally, we investigate the impact of a sampling network's parameters on its ability to faithfully represent a given well-defined distribution. We show that sampling networks with sufficiently strong negative feedback can intrinsically suppress correlations in the background activity and thereby improve their performance substantially.
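
Abstracting away the spiking implementation, the sampling-performance measure used above can be sketched for a toy Boltzmann machine: draw samples by Gibbs updates and quantify the deviation from the target distribution as a Kullback-Leibler divergence. All parameters below are illustrative.

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(2)
    n = 4
    W = rng.normal(0.0, 0.5, (n, n))
    W = (W + W.T) / 2.0              # symmetric couplings
    np.fill_diagonal(W, 0.0)
    b = rng.normal(0.0, 0.5, n)

    # Target Boltzmann distribution p(z) ~ exp(z.W.z / 2 + b.z)
    states = np.array(list(product([0, 1], repeat=n)))
    log_p = 0.5 * np.einsum('si,ij,sj->s', states, W, states) + states @ b
    p_target = np.exp(log_p)
    p_target /= p_target.sum()

    # Gibbs sampling as an idealized stand-in for the spiking dynamics
    z = np.zeros(n)
    counts = np.zeros(len(states))
    for _ in range(100000):
        k = rng.integers(n)
        z[k] = rng.random() < 1.0 / (1.0 + np.exp(-(W[k] @ z + b[k])))
        idx = int(sum(int(z[i]) << (n - 1 - i) for i in range(n)))
        counts[idx] += 1
    p_sampled = counts / counts.sum()

    dkl = np.sum(p_sampled * np.log((p_sampled + 1e-12) / p_target))
    print("D_KL(sampled || target):", dkl)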


Archive | 2017

NEST 2.12.0

Susanne Kunkel; Rajalekshmi Deepu; Hans E. Plesser; Bruno Golosio; Mikkel Elle Lepperød; Jochen Martin Eppler; Sepehr Mahmoudian; Jan Hahne; Dimitri Plotnikov; Claudia Bachmann; Alexander Peyser; Tanguy Fardet; Till Schumann; Jakob Jordan; Ankur Sinha; Oliver Breitwieser; Abigail Morrison; Tammo Ippen; Hendrik Rothe; Steffen Graber; Hesam Setareh; Jesús Garrido; Dennis Terhorst; Alexey Shusharin; Hannah Bos; Arjun Rao; Alex Seeholzer; Mikael Djurfeldt; Maximilian Schmidt; Stine Brekke Vennemo


arXiv: Neurons and Cognition | 2018

Stochasticity from function - why the Bayesian brain may need no noise.

Dominik Dold; Ilja Bytschok; Akos F. Kungl; Andreas Baumbach; Oliver Breitwieser; Walter Senn; Johannes Schemmel; K. Meier; Mihai A. Petrovici


arXiv: Neural and Evolutionary Computing | 2018

Generative models on accelerated neuromorphic hardware.

Akos F. Kungl; Sebastian Schmitt; Johann Klähn; Paul Müller; Andreas Baumbach; Dominik Dold; Alexander Kugele; Nico Gürtler; Luziwei Leng; Eric Müller; Christoph Koke; Mitja Kleider; Christian Mauch; Oliver Breitwieser; Maurice Güttler; Dan Husmann de Oliveira; Kai Husmann; Joscha Ilmberger; Andreas Hartel; Vitali Karasenko; Andreas Grübl; Johannes Schemmel; K. Meier; Mihai A. Petrovici


Archive | 2017

Stochastic neural computation without noise

Jakob Jordan; Mihai A. Petrovici; Oliver Breitwieser; Johannes Schemmel; K. Meier; Markus Diesmann; Tom Tetzlaff

Collaboration


Dive into Oliver Breitwieser's collaborations.

Top Co-Authors

K. Meier (Heidelberg University)

Jakob Jordan (Allen Institute for Brain Science)

Bernhard Vogginger (Dresden University of Technology)

Tom Tetzlaff (Norwegian University of Life Sciences)