Mihai A. Petrovici
Heidelberg University
Publications
Featured research published by Mihai A. Petrovici.
Frontiers in Neuroscience | 2013
Thomas Pfeil; Andreas Grübl; Sebastian Jeltsch; Eric Müller; Paul Müller; Mihai A. Petrovici; Michael Schmuker; Daniel Brüderle; Johannes Schemmel; K. Meier
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.
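The calibration routines mentioned above can be pictured with a small software sketch: each neuron's analog response is measured for a sweep of digital settings and a per-neuron correction is fitted to counteract fixed-pattern noise. This is a minimal sketch only; the functions measure_response and write_parameter are hypothetical placeholders for the actual hardware interface, and the linear fit is an illustrative assumption.

```python
import numpy as np

def calibrate_threshold(neuron_ids, target_v_thresh, measure_response, write_parameter):
    """Fit a per-neuron correction so that the measured threshold
    matches the requested target value (hypothetical interface)."""
    for nid in neuron_ids:
        # Sweep the digital parameter and record the resulting analog threshold.
        settings = np.linspace(0.0, 1.0, 10)
        measured = np.array([measure_response(nid, s) for s in settings])
        # First-order fit per neuron: measured ~ a * setting + b.
        a, b = np.polyfit(settings, measured, deg=1)
        # Invert the fit to find the setting that yields the target threshold.
        write_parameter(nid, (target_v_thresh - b) / a)
```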
Biological Cybernetics | 2011
Daniel Brüderle; Mihai A. Petrovici; Bernhard Vogginger; Matthias Ehrlich; Thomas Pfeil; Sebastian Millner; Andreas Grübl; Karsten Wendt; Eric Müller; Marc-Olivier Schwartz; Dan Husmann de Oliveira; Sebastian Jeltsch; Johannes Fieres; Moritz Schilling; Paul Müller; Oliver Breitwieser; Venelin Petkov; Lyle Muller; Andrew P. Davison; Pradeep Krishnamurthy; Jens Kremkow; Mikael Lundqvist; Eilif Muller; Johannes Partzsch; Stefan Scholze; Lukas Zühl; Christian Mayr; Alain Destexhe; Markus Diesmann; Tobias C. Potjans
In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware–software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results.
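The first workflow component, the PyNN integration, means that a model is described once in the simulator-independent PyNN API and can then be dispatched either to software simulators or to the hardware back-end. The following minimal sketch uses the standard PyNN API with the NEST back-end as a stand-in; the hardware back-end would be selected by importing its own module instead, and all parameter values are illustrative.

```python
import pyNN.nest as sim  # a hardware back-end module would be imported here instead

sim.setup(timestep=0.1)

# A small population of conductance-based integrate-and-fire neurons
# driven by Poisson background input.
neurons = sim.Population(100, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
sim.Projection(noise, neurons, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.002, delay=1.0))

neurons.record("spikes")
sim.run(1000.0)  # biological milliseconds
spiketrains = neurons.get_data().segments[0].spiketrains
sim.end()
```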
BMC Neuroscience | 2015
Mihai A. Petrovici; Ilja Bytschok; Johannes Bill; Johannes Schemmel; K. Meier
The apparent stochasticity of in-vivo neural circuits has long been hypothesized to represent a signature of ongoing stochastic inference in the brain. More recently, a theoretical framework for neural sampling has been proposed, which explains how sample-based inference can be performed by networks of spiking neurons. One particular requirement of this approach is that the neural response function closely follows a logistic curve.
Analytical approaches to calculating neural response functions have been the subject of many theoretical studies. In order to make the problem tractable, particular assumptions regarding the neural or synaptic parameters are usually made. However, biologically significant activity regimes exist which are not covered by these approaches: under strong synaptic bombardment, as is often the case in cortex, the neuron is shifted into a high-conductance state (HCS) characterized by a small membrane time constant. In this regime, synaptic time constants and refractory periods dominate membrane dynamics.
The core idea of our approach is to separately consider two different modes of spiking dynamics: burst spiking and transient quiescence, in which the neuron does not spike for longer periods. We treat the former by propagating the probability density function (PDF) of the effective membrane potential from spike to spike within a burst, while using a diffusion approximation for the latter. We find that our prediction of the neural response function closely matches simulation data. Moreover, in the HCS scenario, we show that the neural response function becomes symmetric and can be well approximated by a logistic function, thereby providing the correct dynamics in order to perform neural sampling. We hereby provide not only a normative framework for Bayesian inference in cortex, but also powerful applications of low-power, accelerated neuromorphic systems to relevant machine learning tasks.
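In the high-conductance state the derived response function reduces to a logistic curve, which is exactly the shape required for neural sampling: the firing probability of a neuron then matches the conditional probability of a binary random variable in a Boltzmann distribution. A minimal numerical sketch of this correspondence, with placeholder values for the offset and slope parameters (which in the model follow from the neuron and noise parameters):

```python
import numpy as np

def logistic_activation(u_eff, u0=0.0, alpha=1.0):
    """Logistic response function p(z = 1 | u_eff). The offset u0 and
    slope alpha are placeholders; in the model they are determined by
    the neuron's parameters and its synaptic background."""
    return 1.0 / (1.0 + np.exp(-(u_eff - u0) / alpha))

# For neural sampling, this matches the conditional of a Boltzmann
# distribution over binary variables: p(z_k = 1 | rest) = sigma(W[k] @ z + b[k]).
print(logistic_activation(np.linspace(-5.0, 5.0, 11)))
```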
PLOS ONE | 2014
Mihai A. Petrovici; Bernhard Vogginger; Paul Müller; Oliver Breitwieser; Mikael Lundqvist; Lyle Muller; Matthias Ehrlich; Alain Destexhe; Anders Lansner; René Schüffny; Johannes Schemmel; K. Meier
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
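Two of the distortion mechanisms studied here, fixed-pattern noise and trial-to-trial variability, can be mimicked in an idealized software simulation before any compensation is attempted. A minimal sketch of such a distortion model; the coefficients of variation below are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def distort_weights(w_target, fixed_pattern_cv=0.2, trial_cv=0.05, rng=rng):
    """Model two hardware distortion mechanisms on a weight matrix:
    a fixed-pattern component drawn once per synapse, and a
    trial-to-trial component redrawn for every emulation run."""
    fixed_pattern = rng.normal(1.0, fixed_pattern_cv, size=w_target.shape)
    def one_trial():
        trial = rng.normal(1.0, trial_cv, size=w_target.shape)
        return w_target * fixed_pattern * trial
    return one_trial

draw = distort_weights(np.full((10, 10), 0.01))
w_run1, w_run2 = draw(), draw()  # same fixed pattern, different trial-to-trial noise
```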
Frontiers in Computational Neuroscience | 2015
Dimitri Probst; Mihai A. Petrovici; Ilja Bytschok; Johannes Bill; Dejan Pecevski; Johannes Schemmel; K. Meier
The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.
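The distributions over binary random variables that such networks can represent directly are Boltzmann distributions, p(z) proportional to exp(0.5 z'Wz + b'z). As a software reference against which a spiking implementation can be compared, a plain Gibbs sampler is sufficient; the following sketch assumes a symmetric weight matrix W with zero diagonal and is not part of the paper's code.

```python
import numpy as np

def gibbs_sample(W, b, n_samples=10000, rng=None):
    """Reference Gibbs sampler for p(z) ~ exp(0.5 z'Wz + b'z) over
    binary z; W is symmetric with zero diagonal."""
    rng = rng or np.random.default_rng(0)
    n = len(b)
    z = rng.integers(0, 2, size=n)
    samples = np.empty((n_samples, n), dtype=int)
    for t in range(n_samples):
        for k in range(n):
            u_k = W[k] @ z + b[k]  # local field of unit k
            z[k] = rng.random() < 1.0 / (1.0 + np.exp(-u_k))
        samples[t] = z
    return samples
```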
Physical Review E | 2016
Mihai A. Petrovici; Johannes Bill; Ilja Bytschok; Johannes Schemmel; K. Meier
The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro. Based on a propagation of the membrane autocorrelation across spike bursts, we provide an analytical derivation of the neural activation function that holds for a large parameter space, including the high-conductance state. On this basis, we show how an ensemble of leaky integrate-and-fire neurons with conductance-based synapses embedded in a spiking environment can attain the correct firing statistics for sampling from a well-defined target distribution. For recurrent networks, we examine convergence toward stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This points to a new computational role of high-conductance states and establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level.
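Convergence toward stationarity can be monitored, for example, by the Kullback-Leibler divergence between the distribution estimated from the collected samples and the target distribution, evaluated after increasing numbers of samples. A minimal sketch of such a convergence check, assuming the network states are available as a binary array; the helper below is illustrative and not from the paper.

```python
import numpy as np

def dkl_vs_samples(samples, p_target, checkpoints):
    """D_KL(p_sampled || p_target) after increasing numbers of samples.
    `samples` is an (N, n) binary array, `p_target` the target
    probability of each of the 2^n states."""
    n = samples.shape[1]
    codes = samples.astype(int) @ (1 << np.arange(n))  # state index per sample
    dkls = []
    for m in checkpoints:
        counts = np.bincount(codes[:m], minlength=2 ** n)
        p_emp = counts / counts.sum()
        mask = p_emp > 0  # convention: 0 * log(0) = 0
        dkls.append(np.sum(p_emp[mask] * np.log(p_emp[mask] / p_target[mask])))
    return dkls
```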
International Symposium on Neural Networks | 2017
Sebastian Schmitt; Johann Klähn; Guillaume Bellec; Andreas Grübl; Maurice Güttler; Andreas Hartel; Stephan Hartmann; Dan Husmann; Kai Husmann; Sebastian Jeltsch; Vitali Karasenko; Mitja Kleider; Christoph Koke; Alexander Kononov; Christian Mauch; Eric Müller; Paul Müller; Johannes Partzsch; Mihai A. Petrovici; Stefan Schiefer; Stefan Scholze; Vasilis Thanasoulis; Bernhard Vogginger; Robert A. Legenstein; Wolfgang Maass; Christian Mayr; René Schüffny; Johannes Schemmel; K. Meier
Emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating them on conventional computers, particularly in terms of speed and energy consumption. However, this usually comes at the cost of reduced control over the dynamics of the emulated networks. In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate. We first convert a deep neural network trained in software to a spiking network on the BrainScaleS wafer-scale neuromorphic system, thereby enabling an acceleration factor of 10,000 compared to the biological time domain. This mapping is followed by in-the-loop training, where in each training step the network activity is first recorded in hardware and then used to compute the parameter updates in software via backpropagation. An essential finding is that the parameter updates do not have to be precise, but only need to approximately follow the correct gradient, which simplifies the computation of updates. Using this approach, after only several tens of iterations, the spiking network shows an accuracy close to the ideal software-emulated prototype. The presented techniques show that deep spiking networks emulated on analog neuromorphic devices can attain good computational performance despite the inherent variations of the analog substrate.
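The in-the-loop scheme alternates between emulation and software updates: the network is run on the hardware, its activity is recorded, and an approximate gradient step is computed in software before the updated weights are written back. A minimal sketch of this loop; emulate_on_hardware and backprop_update are hypothetical placeholders for the actual interface, and the hyperparameters are illustrative.

```python
def train_in_the_loop(weights, batches, emulate_on_hardware, backprop_update,
                      n_iterations=40, learning_rate=0.05):
    """Sketch of in-the-loop training. `emulate_on_hardware` writes the
    weights to the device, runs the spiking network on a batch and
    returns the recorded activity; `backprop_update` computes an
    approximate gradient from that activity in software."""
    for it in range(n_iterations):
        batch = batches[it % len(batches)]
        activity = emulate_on_hardware(weights, batch)        # measured, not simulated
        gradient = backprop_update(weights, activity, batch)  # approximate is sufficient
        weights = [w - learning_rate * g for w, g in zip(weights, gradient)]
    return weights
```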
International Symposium on Circuits and Systems | 2017
Mihai A. Petrovici; Sebastian Schmitt; Johann Klähn; D. Stockel; A. Schroeder; Guillaume Bellec; Johannes Bill; Oliver Breitwieser; Ilja Bytschok; Andreas Grübl; Maurice Güttler; Andreas Hartel; Stephan Hartmann; Dan Husmann; Kai Husmann; Sebastian Jeltsch; Vitali Karasenko; Mitja Kleider; Christoph Koke; Alexander Kononov; Christian Mauch; Eric Müller; Paul Müller; Johannes Partzsch; Thomas Pfeil; Stefan Schiefer; Stefan Scholze; A. Subramoney; Vasilis Thanasoulis; Bernhard Vogginger
Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks. In this paper, we review several possibilities to reverse-map these architectures to biologically more realistic spiking networks with the aim of emulating them on fast, low-power neuromorphic hardware. Since many of these devices employ analog components, which cannot be perfectly controlled, finding ways to compensate for the resulting effects represents a key challenge. Here, we discuss three different strategies to address this problem: the addition of auxiliary network components for stabilizing activity, the utilization of inherently robust architectures and a training method for hardware-emulated networks that functions without perfect knowledge of the system's dynamics and parameters. For all three scenarios, we corroborate our theoretical considerations with experimental results on accelerated analog neuromorphic platforms.
Scientific Reports | 2018
Luziwei Leng; Roman Martel; Oliver Breitwieser; Ilja Bytschok; Walter Senn; Johannes Schemmel; K. Meier; Mihai A. Petrovici
Spiking networks that perform probabilistic inference have been proposed both as models of cortical computation and as candidates for solving problems in machine learning. However, the evidence for spike-based computation being in any way superior to non-spiking alternatives remains scarce. We propose that short-term synaptic plasticity can provide spiking networks with distinct computational advantages compared to their classical counterparts. When learning from high-dimensional, diverse datasets, deep attractors in the energy landscape often cause mixing problems for the sampling process. Classical algorithms solve this problem by employing various tempering techniques, which are both computationally demanding and require global state updates. We demonstrate how similar results can be achieved in spiking networks endowed with local short-term synaptic plasticity. Additionally, we discuss how these networks can even outperform tempering-based approaches when the training data is imbalanced. We thereby uncover a powerful computational property of the biologically inspired, local, spike-triggered synaptic dynamics based simply on a limited pool of synaptic resources, which enables spiking networks to deal with complex sensory data.
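The limited pool of synaptic resources referred to above is commonly captured by a Tsodyks-Markram-type model of short-term depression, in which each presynaptic spike consumes a fraction of the available resources, which then recover exponentially. A minimal sketch of such a synapse; the parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def tsodyks_markram_weights(spike_times, w0=1.0, U=0.5, tau_rec=200.0):
    """Effective weight of a depressing synapse at each presynaptic spike:
    a fraction U of the available resources R is used per spike and
    recovers with time constant tau_rec (ms)."""
    R, last_t, weights = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            R = 1.0 - (1.0 - R) * np.exp(-(t - last_t) / tau_rec)  # recovery
        weights.append(w0 * U * R)  # transmitted efficacy at this spike
        R -= U * R                  # resources consumed by the spike
        last_t = t
    return weights
```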
International Symposium on Neural Networks | 2017
Mihai A. Petrovici; Anna Schroeder; Oliver Breitwieser; Andreas Grübl; Johannes Schemmel; K. Meier
How spiking networks are able to perform probabilistic inference is an intriguing question, not only for understanding information processing in the brain, but also for transferring these computational principles to neuromorphic silicon circuits. A number of computationally powerful spiking network models have been proposed, but most of them have only been tested, under ideal conditions, in software simulations. Any implementation in an analog, physical system, be it in vivo or in silico, will generally lead to distorted dynamics due to the physical properties of the underlying substrate. In this paper, we discuss several such distortive effects that are difficult or impossible to remove by classical calibration routines or parameter training. We then argue that hierarchical networks of leaky integrate-and-fire neurons can offer the required robustness for physical implementation and demonstrate this with both software simulations and emulation on an accelerated analog neuromorphic device.