Tobias C. Potjans
University of Freiburg
Publications
Featured research published by Tobias C. Potjans.
Cerebral Cortex | 2014
Tobias C. Potjans; Markus Diesmann
In the past decade, the cell-type specific connectivity and activity of local cortical networks have been characterized experimentally to some detail. In parallel, modeling has been established as a tool to relate network structure to activity dynamics. While available comprehensive connectivity maps (Thomson, West, et al. 2002; Binzegger et al. 2004) have been used in various computational studies, prominent features of the simulated activity such as the spontaneous firing rates do not match the experimental findings. Here, we analyze the properties of these maps to compile an integrated connectivity map, which additionally incorporates insights on the specific selection of target types. Based on this integrated map, we build a full-scale spiking network model of the local cortical microcircuit. The simulated spontaneous activity is asynchronous irregular and cell-type specific firing rates are in agreement with in vivo recordings in awake animals, including the low rate of layer 2/3 excitatory cells. The interplay of excitation and inhibition captures the flow of activity through cortical layers after transient thalamic stimulation. In conclusion, the integration of a large body of the available connectivity data enables us to expose the dynamical consequences of the cortical microcircuitry.
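The central object of this study is a cell-type specific connectivity map. As a purely illustrative sketch (the population sizes and probabilities below are placeholders, not the published map or the paper's connection algorithm), such a map can be represented as a matrix of pairwise connection probabilities from which the expected number of synapses per projection follows directly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative cell-type specific connectivity map: rows = target populations,
# columns = source populations. All values are placeholders, not published data.
pops = ["L2/3e", "L2/3i", "L4e", "L4i"]
N = {"L2/3e": 2068, "L2/3i": 583, "L4e": 2191, "L4i": 548}   # illustrative sizes
P = np.array([
    [0.10, 0.17, 0.04, 0.08],
    [0.14, 0.14, 0.03, 0.05],
    [0.01, 0.01, 0.05, 0.14],
    [0.07, 0.00, 0.08, 0.16],
])

# Under pairwise-Bernoulli connectivity, the number of synapses in each projection
# is binomially distributed with N_src * N_tgt trials and probability P[tgt, src].
for ti, tgt in enumerate(pops):
    for si, src in enumerate(pops):
        n_syn = rng.binomial(N[src] * N[tgt], P[ti, si])
        print(f"{src} -> {tgt}: ~{n_syn} synapses")
```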
Biological Cybernetics | 2011
Daniel Brüderle; Mihai A. Petrovici; Bernhard Vogginger; Matthias Ehrlich; Thomas Pfeil; Sebastian Millner; Andreas Grübl; Karsten Wendt; Eric Müller; Marc-Olivier Schwartz; Dan Husmann de Oliveira; Sebastian Jeltsch; Johannes Fieres; Moritz Schilling; Paul Müller; Oliver Breitwieser; Venelin Petkov; Lyle Muller; Andrew P. Davison; Pradeep Krishnamurthy; Jens Kremkow; Mikael Lundqvist; Eilif Muller; Johannes Partzsch; Stefan Scholze; Lukas Zühl; Christian Mayr; Alain Destexhe; Markus Diesmann; Tobias C. Potjans
In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware–software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results.
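The workflow hinges on a simulator-independent model description in PyNN. The sketch below uses the software backend pyNN.nest as a stand-in (the hardware backend module is not named here, and all parameter values are illustrative) to show the kind of description that the proposed mapping process would translate into a hardware configuration:

```python
import pyNN.nest as sim   # stand-in backend; the hardware backend module is assumed

sim.setup(timestep=0.1)

exc = sim.Population(80, sim.IF_cond_exp(tau_m=20.0), label="exc")
inh = sim.Population(20, sim.IF_cond_exp(tau_m=20.0), label="inh")
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=8.0))

sim.Projection(noise, exc, sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.004, delay=1.0), receptor_type="excitatory")
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.004, delay=1.5), receptor_type="excitatory")
sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.02, delay=1.5), receptor_type="inhibitory")

exc.record("spikes")
sim.run(1000.0)
spikes = exc.get_data("spikes")
sim.end()
```

Because the description is backend-agnostic, the same script can in principle be pointed at a reference software simulation or at the virtual or prototype hardware device, which is the basis of the comparison scheme described in the abstract.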
Neuroinformatics | 2010
Mikael Djurfeldt; Johannes Hjorth; Jochen Martin Eppler; Niraj Dudani; Moritz Helias; Tobias C. Potjans; Upinder S. Bhalla; Markus Diesmann; Jeanette Hellgren Kotaleski; Örjan Ekeberg
MUSIC is a standard API allowing large scale neuron simulators to exchange data within a parallel computer during runtime. A pilot implementation of this API has been released as open source. We provide experiences from the implementation of MUSIC interfaces for two neuronal network simulators of different kinds, NEST and MOOSE. A multi-simulation of a cortico-striatal network model involving both simulators is performed, demonstrating how MUSIC can promote inter-operability between models written for different simulators and how these can be re-used to build a larger model system. Benchmarks show that the MUSIC pilot implementation provides efficient data transfer in a cluster computer with good scaling. We conclude that MUSIC fulfills the design goal that it should be simple to adapt existing simulators to use MUSIC. In addition, since the MUSIC API enforces independence of the applications, the multi-simulation could be built from pluggable component modules without adaptation of the components to each other in terms of simulation time-step or topology of connections between the modules.
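A rough sketch (NEST 2.x-style; model and parameter names should be checked against the NEST/MUSIC documentation) of how one simulator in such a multi-simulation can expose and receive spikes through named MUSIC ports. Running it additionally requires a MUSIC-enabled NEST build, a MUSIC configuration file naming the participating applications and port widths, and an MPI launcher:

```python
import nest   # assumes a MUSIC-enabled NEST build (NEST 2.x-style API)

neurons = nest.Create("iaf_psc_alpha", 10)

# Publish the spikes of the local neurons on the MUSIC output port "spikes_out",
# one MUSIC channel per neuron.
out_proxy = nest.Create("music_event_out_proxy", params={"port_name": "spikes_out"})
for channel, gid in enumerate(neurons):
    nest.Connect([gid], out_proxy, "one_to_one", {"music_channel": channel})

# Receive spikes from the partner application on the input port "spikes_in" and
# feed each channel into one local neuron.
in_proxies = nest.Create("music_event_in_proxy", 10)
nest.SetStatus(in_proxies, [{"port_name": "spikes_in", "music_channel": c}
                            for c in range(10)])
nest.Connect(in_proxies, neurons, "one_to_one")

nest.Simulate(1000.0)
```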
Frontiers in Neuroscience | 2012
Thomas Pfeil; Tobias C. Potjans; Sven Schrader; Wiebke Potjans; Johannes Schemmel; Markus Diesmann; K. Meier
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing dependent plasticity, reduction in resources leads to limitations as compared to floating-point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may raise synergy effects between hardware developers and neuroscientists.
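A minimal sketch of the kind of weight discretization discussed, mapping continuous reference weights onto the 16 levels of a 4-bit synapse (the rounding scheme and weight range below are assumptions, not the hardware's own mapping):

```python
import numpy as np

def discretize(weights, w_max, n_bits=4):
    """Map continuous weights in [0, w_max] onto 2**n_bits equidistant levels."""
    max_level = 2 ** n_bits - 1
    levels = np.clip(np.round(weights / w_max * max_level), 0, max_level)
    return levels / max_level * w_max

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=1000)      # continuous reference weights (arbitrary units)
w4 = discretize(w, w_max=1.0, n_bits=4)   # 16 levels, as on a 4-bit synapse
print("max. absolute discretization error:", np.max(np.abs(w - w4)))
```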
Frontiers in Computational Neuroscience | 2011
Nobuhiko Wagatsuma; Tobias C. Potjans; Markus Diesmann; Tomoki Fukai
A vast amount of information about the external world continuously flows into the brain, whereas its capacity to process such information is limited. Attention enables the brain to allocate its resources of information processing to selected sensory inputs for reducing its computational load, and effects of attention have been extensively studied in visual information processing. However, how the microcircuit of the visual cortex processes attentional information from higher areas remains largely unknown. Here, we explore the complex interactions between visual inputs and an attentional signal in a computational model of the visual cortical microcircuit. Our model not only successfully accounts for previous experimental observations of attentional effects on visual neuronal responses, but also predicts contrasting differences in the attentional effects of top-down signals between cortical layers: attention to a preferred stimulus of a column enhances neuronal responses of layers 2/3 and 5, the output stations of cortical microcircuits, whereas attention suppresses neuronal responses of layer 4, the input station of cortical microcircuits. We demonstrate that the specific modulation pattern of layer-4 activity, which emerges from inter-laminar synaptic connections, is crucial for a rapid shift of attention to a currently unattended stimulus. Our results suggest that top-down signals act differently on different layers of the cortical microcircuit.
Frontiers in Neuroinformatics | 2012
Susanne Kunkel; Tobias C. Potjans; Jochen Martin Eppler; Hans E. Plesser; Abigail Morrison; Markus Diesmann
The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been investigated in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture, where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of neuronal simulators as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place. As a consequence, development cycles can be shorter and less expensive. Applying the model to our freely available Neural Simulation Tool (NEST), we identify the software components dominant at different scales, and develop general strategies for reducing the memory consumption, in particular by using data structures that exploit the sparseness of the local representation of the network. We show that these adaptations enable our simulation software to scale up to the order of 10,000 processors and beyond. As memory consumption issues are likely to be relevant for any software dealing with complex connectome data on such architectures, our approach and our findings should be useful for researchers developing novel neuroinformatics solutions to the challenges posed by the connectome project.
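The idea of a linear memory model can be illustrated with a toy version: memory per core written as a base term plus contributions that scale with the total network size (data replicated on every core) and with the numbers of locally represented neurons and synapses. All coefficients below are made-up placeholders, not measurements of NEST:

```python
def memory_per_core(n_neurons, syn_per_neuron, n_cores,
                    m0=0.5e9, m_total=16.0, m_neuron=1500.0, m_synapse=48.0):
    """Toy linear memory model (bytes per core); all coefficients are placeholders."""
    n_local = n_neurons / n_cores                    # neurons simulated on this core
    s_local = n_neurons * syn_per_neuron / n_cores   # synapses stored on this core
    return m0 + m_total * n_neurons + m_neuron * n_local + m_synapse * s_local

# Example: 10^7 neurons with 10^4 synapses each, on increasing numbers of cores.
for cores in (1024, 4096, 16384):
    gb = memory_per_core(1e7, 1e4, cores) / 1e9
    print(f"{cores:6d} cores: ~{gb:.1f} GB per core")
```

In such a model, the term proportional to the total number of neurons does not shrink as more cores are added, which is exactly the kind of contribution that data structures exploiting the sparseness of the local network representation are meant to remove.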
BMC Neuroscience | 2011
Henrik Lindén; Tom Tetzlaff; Tobias C. Potjans; Klas H. Pettersen; Sonja Grün; Markus Diesmann; Gaute T. Einevoll
The local field potential (LFP), usually referring to the low-frequency part of an extracellularly recorded potential (< 500 Hz), is nowadays routinely measured together with the spiking activity. The LFP is commonly believed to mainly reflect synaptic activity in a local population surrounding the electrode [1], but how large this population is, i.e., how many neurons contribute to the signal, is still debated. In this modeling study we investigate which factors influence the spatial summation of contributions that generate the LFP signal. A better understanding of this is crucial for a correct interpretation of the LFP, especially when analyzing multiple LFP signals recorded simultaneously at different cortical sites. We use a simplified two-dimensional model of a cortical population of neurons where the LFP is constructed as a weighted sum of signal contributions from all cells within a certain radial distance to the recording electrode. First we consider a general formulation of the model: if the single-cell LFP contributions can be viewed as current dipole sources [2], the single-cell amplitude will decay as 1/r^2 with distance r to the electrode. On the other hand, for the two-dimensional geometry considered here, the number of neurons at a given distance increases linearly with r. In addition to these two opposed scaling factors, the amplitude of the summed LFP signal also depends on how correlated the single-cell LFP sources are. We calculate the LFP amplitude as a function of the population radius and relate it to the above factors. We show that if the single-cell contributions decay as dipole sources or more steeply with distance, and if the sources are uncorrelated, the LFP originates from a small local population. Cells outside of this population do not contribute to the LFP. If, however, the different LFP sources are uniformly correlated, cells at any distance contribute substantially to the LFP amplitude. In this case the LFP reach is only limited by the size of the region of correlated sources. This result highlights that the spatial region of the LFP is not fixed; rather, it changes with the dynamics of the underlying synaptic activity. We further validate these results through LFP simulations of morphologically reconstructed cortical cells [2-4], where we study the effects of neuronal morphology on the size of the region contributing to the LFP. Finally, we show the laminar dependence of the reach measure used here and discuss potential implications for the interpretation of experimentally recorded LFPs.
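The balance described here between the 1/r^2 single-cell decay, the linearly growing number of cells per annulus, and the correlation between sources can be reproduced with a short numerical sketch (all parameter values are arbitrary illustrations, not the model's actual parameters):

```python
import numpy as np

def lfp_amplitude(R_mm, density=1e5, r_min=0.01, dr=0.001, c=0.0):
    """Std. dev. of the summed LFP from a 2D disc of radius R_mm (arbitrary units).

    Single-cell contributions decay as 1/r**2 with distance r (dipole-like);
    c is a uniform pairwise correlation between the single-cell sources.
    """
    r = np.arange(r_min, R_mm, dr)
    n = density * 2.0 * np.pi * r * dr   # cells per annulus grows linearly with r
    a = 1.0 / r**2                       # single-cell amplitude at the electrode
    # Var(sum) = (1 - c) * sum_i a_i^2 + c * (sum_i a_i)^2 for uniform correlation c
    var = (1.0 - c) * np.sum(n * a**2) + c * np.sum(n * a)**2
    return np.sqrt(var)

for R in (0.1, 0.5, 2.0):                # population radius in mm
    print(f"R = {R:.1f} mm: uncorrelated {lfp_amplitude(R, c=0.0):.3g}, "
          f"correlated {lfp_amplitude(R, c=0.1):.3g}")
```

With c = 0 the amplitude saturates quickly with the population radius, whereas a uniform correlation keeps distant cells contributing, consistent with the summary above.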
BMC Neuroscience | 2011
Rembrandt Bakker; Tobias C. Potjans; Thomas Wachtler; Markus Diesmann
CoCoMac (cocomac.org) is a large database on structural connectivity in the Macaque brain, based on over 450 published axonal tracing studies [1]. This huge curation effort took place under the guidance of Rolf Kötter and provided data to numerous brain network analysis and modeling studies. While working on a major new release of the database, Rolf Kötter sadly passed away in June 2010. We are committed to fostering the further development of CoCoMac and are gradually releasing the newly developed web interface at the CoCoMac 2.0 server hosted at the German INCF Node (cocomac.g-node.org). Macaque structural connectivity data is relevant for uncovering the large-scale human connectome and increasingly plays a role in constraining neuronal network models that link the connectivity structure to activity dynamics and network function. With recent advances in computer hardware and simulation software [2], brain-scale simulations with millions of spiking neurons become feasible, accounting simultaneously for the macroscopic and the microscopic structure of cortical networks. These models promote the integration of local microcircuitry and long-range connectivity data on the level of the cell-type specificity of connections. Such a level of detail cannot be provided by diffusion MRI-based techniques: for layer-specificity, intracortical resolution and directionality, one has to fall back on axonal tracing experiments. Combining these into a complete picture for the entire brain is an enormous challenge, as it involves experiments carried out in thousands of individual brains over the course of a century. Since the spatial coordinates of tracing injections are undocumented in most publications, CoCoMac has had no other choice than to describe connectivity completely in terms of named brain regions: 'region A has axonal projections to region B'. A nomenclature mapping service known as ORT [3] translates named brain regions in older brain atlases to newer ones and vice versa. Producing correct nomenclature mappings is of crucial importance. Incorrect or imprecise use of nomenclature in the literature leads to conflicting (chains of) mapping statements, and to errors in the resulting connectivity. CoCoMac 2.0 detects for each mapping statement whether conflicting versions exist, and uses Bayesian reasoning to eliminate the inconsistent literature statements causing the conflicts. For data exchange with MRI-based techniques, CoCoMac needs to attach spatial coordinates to its connectivity data. This is achieved by integrating CoCoMac with the INCF Scalable Brain Atlas (SBA, scalablebrainatlas.incf.org/cocomac) [4]. This web-based tool interactively displays structural connectivity in a spatial reference framework and supports a number of commonly used brain atlases. The SBA provides a point-and-click interface to the low-level text-based services at the CoCoMac server.
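A hypothetical sketch of the kind of data structures involved: connectivity expressed purely as statements between named regions, plus an ORT-style mapping table for translating region names between atlases. All names, relations, and statements below are invented for illustration and do not reproduce CoCoMac content:

```python
# All region names, relations and statements are invented for illustration.
projections = [
    {"source": "V1", "target": "V2", "evidence": "present"},
    {"source": "V2", "target": "FEF", "evidence": "present"},
]

# ORT-style nomenclature mapping: how a region defined in one atlas relates to
# regions defined in another (e.g. "identical", "overlaps", "subregion_of").
mappings = {
    ("AtlasA", "V1"): [("AtlasB", "OC", "identical")],
    ("AtlasA", "V2"): [("AtlasB", "OB", "overlaps")],
}

def translate(atlas_from, region, atlas_to):
    """Candidate regions in atlas_to for a region named according to atlas_from."""
    return [(r, rel) for (a, r, rel) in mappings.get((atlas_from, region), [])
            if a == atlas_to]

print(translate("AtlasA", "V1", "AtlasB"))   # -> [('OC', 'identical')]
```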
BMC Neuroscience | 2009
Henrik Lindén; Klas H. Pettersen; Tom Tetzlaff; Tobias C. Potjans; Michael Denker; Markus Diesmann; Sonja Grün; Gaute T. Einevoll
Joint extracellular recordings of cortical spiking activity and mesoscopic population signals, such as the local field potential (LFP), are becoming increasingly popular as a tool for measuring cortical activity. The LFP, the low-frequency part of extracellular potentials, is thought to mainly reflect dendritic transmembrane currents following synaptic activity in the vicinity of the recording electrode. As the LFP signal stems from the population activity of a large number of cells, it represents a more robust measure of the network dynamics than single-cell recordings. However, despite recent modeling [1] and experimental [2-4] studies on the origin of the LFP, much of the nature of this signal still remains to be understood. In particular, the literature contains contradicting reports on how large the cortical area represented in the signal from an LFP electrode is. While some studies claim a spatial range as large as several millimeters [4], others suggest a much more local origin of LFP fluctuations [2]. A possible reason for this apparent discrepancy is that, in real brain measurements, it is difficult to disentangle LFP correlations between nearby recording sites due to signal conduction from intrinsic correlations in the generators of the signal.
BMC Neuroscience | 2009
Tobias C. Potjans; Tomoki Fukai; Markus Diesmann
The local cortical network consists of specifically interconnected neuronal populations (see [1] for review). This microcircuitry determines the possible interactions between neurons and thus may play a crucial role in shaping neuronal activity. We investigate the dynamical implications of the specificity of connections in the local network by means of large-scale simulations [2] of a spiking layered network model. To this end, we quantify the specificity of connections measured by diverse experimental techniques.