Susanne Kunkel
University of Freiburg
Publications
Featured research published by Susanne Kunkel.
Frontiers in Neuroinformatics | 2014
Susanne Kunkel; Maximilian Schmidt; Jochen Martin Eppler; Hans E. Plesser; Gen Masumoto; Jun Igarashi; Shin Ishii; Tomoki Fukai; Abigail Morrison; Markus Diesmann; Moritz Helias
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
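The central idea, collapsing the per-node heterogeneity of synapse types into statically typed containers, can be illustrated with a minimal C++ sketch. The names below (ConnectorBase, HomConnector, StaticSynapse) are hypothetical and do not reproduce the NEST data structures; the sketch only shows how template metaprogramming pays the cost of type information once per container rather than once per synapse.

```cpp
// Illustration only (hypothetical names, not the NEST API): all synapses of one
// static type are stored contiguously in a single typed container, so type
// information and virtual dispatch are paid once per container, not per synapse.
#include <cstddef>
#include <vector>

struct ConnectorBase {                      // type-erased handle kept by the node
  virtual void send_all(double t) = 0;
  virtual ~ConnectorBase() {}
};

template <typename ConnectionT>             // one instantiation per synapse model
struct HomConnector : ConnectorBase {
  std::vector<ConnectionT> conns;           // homogeneous, contiguous storage
  void send_all(double t) override {
    for (std::size_t i = 0; i < conns.size(); ++i)
      conns[i].send(t);                     // statically dispatched per element
  }
};

struct StaticSynapse {                      // toy synapse model for the sketch
  double weight;
  void send(double /*t*/) { /* deliver weighted spike to the target here */ }
};

int main() {
  HomConnector<StaticSynapse> c;            // typical case: one type per node
  c.conns.push_back(StaticSynapse{0.5});
  c.send_all(0.1);
}
```

Because a given source neuron typically has at most one synapse of a given type on a compute node, such containers can additionally be specialized for the single-element case, which is the second dimension of the collapse described in the abstract.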
Frontiers in Neuroinformatics | 2010
Alexander Hanuschkin; Susanne Kunkel; Moritz Helias; Abigail Morrison; Markus Diesmann
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision.
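The key algorithmic step, detecting a threshold crossing in the just-completed time step and locating it retrospectively, can be stated in a few lines. The following C++ sketch uses a leaky integrate-and-fire neuron with constant suprathreshold input and linear interpolation of the crossing time; the model, parameters, and interpolation order are illustrative assumptions, not the implementation evaluated in the paper.

```cpp
// Sketch only: retrospective threshold-crossing detection in a time-driven loop.
#include <cmath>
#include <cstdio>

int main() {
  const double h = 0.1;        // time step (ms)
  const double tau = 10.0;     // membrane time constant (ms)
  const double theta = 1.0;    // firing threshold
  const double I = 1.2;        // constant suprathreshold input
  double V = 0.0;

  for (int step = 0; step < 1000; ++step) {
    const double V_prev = V;
    // exact propagation of dV/dt = (-V + I)/tau over one step
    V = I + (V_prev - I) * std::exp(-h / tau);

    if (V >= theta) {          // a crossing happened somewhere inside this step
      // locate it in the recent past, here by linear interpolation
      const double offset = h * (theta - V_prev) / (V - V_prev);
      std::printf("spike at t = %.4f ms\n", step * h + offset);
      V = 0.0;                 // reset after the spike
    }
  }
}
```

An event-driven scheme would instead have to predict, at each incoming spike, whether and when the threshold will be crossed in the future, which for most integrate-and-fire models has no closed-form solution.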
Frontiers in Computational Neuroscience | 2010
Susanne Kunkel; Markus Diesmann; Abigail Morrison
Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper, we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into the reasons why such development does not take place in unconstrained systems and enables us to identify biologically motivated candidate adaptations to the balanced random network model that might enable it.
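For readers unfamiliar with the term, weight-dependent STDP refers to update rules in which the size of a weight change depends on the current weight. A commonly used form is given below purely as an illustration; the paper's analysis is not restricted to these exact expressions.

```latex
% One common weight-dependent STDP rule (illustrative form only)
\begin{align}
  \Delta w_{+} &= \lambda\, w_0^{\,1-\mu}\, w^{\mu}\, e^{-|\Delta t|/\tau_{+}}
    && \text{(potentiation, pre before post)}\\
  \Delta w_{-} &= -\lambda\, \alpha\, w\, e^{-|\Delta t|/\tau_{-}}
    && \text{(depression, post before pre)}
\end{align}
```

Where potentiation and depression balance on average determines the fixed points of the weight dynamics; the stability of such fixed points is the kind of argument the abstract refers to.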
Frontiers in Neuroinformatics | 2012
Moritz Helias; Susanne Kunkel; Gen Masumoto; Jun Igarashi; Jochen Martin Eppler; Shin Ishii; Tomoki Fukai; Abigail Morrison; Markus Diesmann
NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum-filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi-interactive working style and render simulations on this scale a practical tool for computational neuroscience.
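The hybrid parallelization mentioned above, several threads per MPI process with each thread updating a disjoint subset of the locally represented neurons, follows a standard pattern. The sketch below is a generic MPI/OpenMP skeleton, not NEST code; the neuron model and the placement of the spike exchange are placeholders.

```cpp
// Generic hybrid MPI + OpenMP skeleton (illustration only, not NEST code).
#include <mpi.h>
#include <cstdio>
#include <vector>

struct Neuron {
  double V = 0.0;
  void update(double h) { V += 0.1 * h; }    // placeholder dynamics
};

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  std::vector<Neuron> local(10000);          // neurons owned by this process
  const double h = 0.1;                      // time step (ms)

  for (int step = 0; step < 100; ++step) {
    #pragma omp parallel for                 // e.g. 8 threads per MPI process
    for (long i = 0; i < static_cast<long>(local.size()); ++i)
      local[i].update(h);
    // global spike exchange between processes would go here
    MPI_Barrier(MPI_COMM_WORLD);
  }

  if (rank == 0)
    std::printf("done: V[0] = %f\n", local[0].V);
  MPI_Finalize();
}
```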
Frontiers in Neuroinformatics | 2012
Susanne Kunkel; Tobias C. Potjans; Jochen Martin Eppler; Hans E. Plesser; Abigail Morrison; Markus Diesmann
The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been investigated in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of neuronal simulators as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place. As a consequence, development cycles can be shorter and less expensive. Applying the model to our freely available Neural Simulation Tool (NEST), we identify the software components dominant at different scales, and develop general strategies for reducing the memory consumption, in particular by using data structures that exploit the sparseness of the local representation of the network. We show that these adaptations enable our simulation software to scale up to the order of 10,000 processors and beyond. As memory consumption issues are likely to be relevant for any software dealing with complex connectome data on such architectures, our approach and our findings should be useful for researchers developing novel neuroinformatics solutions to the challenges posed by the connectome project.
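The structure of such a linear memory model is simple enough to sketch: the memory of a process is written as a constant base term plus contributions that scale with the number of locally represented neurons and synapses, and possibly with the total network size. The decomposition and all numbers below are placeholders chosen for illustration, not the fitted model or measured NEST values from the paper.

```cpp
// Minimal sketch of a linear per-node memory model of the kind described above.
// All parameter values are placeholders, not measured NEST numbers.
#include <cstdio>

int main() {
  const double N = 1e8;               // total neurons in the network
  const double K = 1e4;               // synapses per neuron
  const double M_procs = 98304;       // number of MPI processes (placeholder)

  const double m0 = 500e6;            // fixed base memory per process (bytes)
  const double m_neuron = 1500;       // bytes per local neuron
  const double m_synapse = 48;        // bytes per local synapse
  const double m_node_overhead = 20;  // bytes kept per neuron of the whole network

  const double n_local = N / M_procs;          // neurons represented on this process
  const double s_local = n_local * K;          // synapses stored on this process
  const double M_total = m0 + N * m_node_overhead
                       + n_local * m_neuron + s_local * m_synapse;

  std::printf("predicted memory per process: %.2f GB\n", M_total / 1e9);
}
```

Evaluating such a model for different process counts immediately shows which term saturates the per-node memory first and therefore where implementation effort pays off.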
Frontiers in Neuroinformatics | 2015
Jan Hahne; Moritz Helias; Susanne Kunkel; Jun Igarashi; Matthias Bolten; Andreas Frommer; Markus Diesmann
Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology.
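Schematically, a Jacobi-type waveform-relaxation step solves each neuron's equations over one communication interval using the gap-junction partners' membrane-potential waveforms from the previous iterate; the notation below is generic and chosen for illustration, not taken from the paper.

```latex
% Jacobi-type waveform-relaxation iteration for gap-junction coupling (schematic)
\begin{equation}
  \frac{d V_i^{(k+1)}}{dt}
    = f_i\!\left(V_i^{(k+1)}, t\right)
      + \sum_j g_{ij}\,\bigl(V_j^{(k)}(t) - V_i^{(k+1)}(t)\bigr),
  \qquad t \in [t_0,\, t_0 + d_{\min}]
\end{equation}
```

Iterating k = 0, 1, ... until the exchanged waveforms converge means that only whole waveforms are communicated per interval, which is what makes the method compatible with the delayed communication strategy used for spikes.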
International Workshop on Brain-Inspired Computing | 2013
S. J. van Albada; Susanne Kunkel; Abigail Morrison; Markus Diesmann
Large-scale simulations of neuronal networks provide a unique view onto brain dynamics, complementing experiments, small-scale simulations, and theory. They enable the investigation of integrative models to arrive at a multi-scale picture of brain dynamics relating macroscopic imaging measures to the microscopic dynamics. Recent years have seen rapid development of the necessary simulation technology. We give an overview of design features of the NEural Simulation Tool (NEST) that enable simulations of spiking point neurons to be scaled to hundreds of thousands of processors. The performance of supercomputing applications is traditionally assessed using scalability plots. We discuss reasons why such measures should be interpreted with care in the context of neural network simulations. The scalability of neural network simulations on available supercomputers is limited by memory constraints rather than computational speed. This calls for future generations of supercomputers that are more attuned to the requirements of memory-intensive neuroscientific applications.
BMC Neuroscience | 2010
Susanne Kunkel; Markus Diesmann; Abigail Morrison
Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above (see, e.g. [1-3]). Here, we describe a theory for the stimulus-driven development of feed-forward structures in random networks. The theory explains why the emergence of such structures does not take place in unconstrained systems [4] and enables us to identify candidate biologically motivated adaptations to the balanced random network model that might facilitate it. Finally, we investigate these candidate adaptations in large-scale simulations.
BMC Neuroscience | 2008
Alexander Hanuschkin; Susanne Kunkel; Moritz Helias; Abigail Morrison; Markus Diesmann
Discrete-time neuronal network simulation strategies typically constrain spike times to a grid determined by the computational step size. This approach can have the effect of introducing artificial synchrony [1]. However, time-continuous approaches can be computationally demanding, both with respect to calculating future spike times and to event management, particularly for large network sizes. To address this problem, Morrison et al. [2] presented a general method of handling off-grid spiking in combination with exact subthreshold integration in discrete-time driven simulations [3,4]. Within each time step an event-driven environment is emulated to process incoming spikes, whereas the timing of outgoing spikes is based on interpolation. Therefore, the computation step size is a decisive factor for both integration error and simulation time.
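The exact subthreshold integration referred to here rests on the standard propagator formulation for linear model dynamics, stated schematically below; this is textbook material rather than a quotation from the cited works.

```latex
% Exact propagation of linear subthreshold dynamics over one step h (schematic)
\begin{equation}
  \dot{\mathbf{y}}(t) = A\,\mathbf{y}(t)
  \quad\Longrightarrow\quad
  \mathbf{y}(t+h) = e^{A h}\,\mathbf{y}(t)
\end{equation}
```

The matrix exponential e^{Ah} is computed once for the fixed step h, so the subthreshold dynamics are advanced without discretization error; only the timing of threshold crossings within a step requires the interpolation mentioned above.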
BMC Neuroscience | 2013
Susanne Kunkel; Maximilian Schmidt; Jochen Martin Eppler; Hans E. Plesser; Jun Igarashi; Gen Masumoto; Tomoki Fukai; Shin Ishii; Abigail Morrison; Markus Diesmann; Moritz Helias
Over the last couple of years, supercomputers such as the Blue Gene/Q system JUQUEEN in Jülich and the K computer in Kobe have become available for neuroscience research. These massively parallel systems open the field for a new class of scientific questions as they provide the resources to represent and simulate brain-scale networks, but they also confront the developers of simulation software with a new class of problems. Initial tests with our neuronal network simulator NEST [1] on JUGENE (the predecessor of JUQUEEN) revealed that in order to exploit the memory capacities of such machines, we needed to improve the parallelization of the fundamental data structures. To address this, we developed an analytical framework [2], which serves as a guideline for a systematic and iterative restructuring of the simulation kernel. In December 2012, the 3rd generation technology was released with NEST 2.2, which enables simulations of 10^8 neurons and 10,000 synapses per neuron on the K computer [3]. Even though the redesign of the fundamental data structures of NEST is driven by the demand for simulations of interacting brain areas, we do not aim at solutions tailored to a specific brain-scale model or computing architecture. Our goal is to maintain a single highly scalable code base that meets the requirements of such simulations whilst still performing well on modestly dimensioned lab clusters and even laptops. Here, we introduce the 4th generation simulation kernel and describe the development workflow that yielded the following three major improvements: the self-collapsing connection infrastructure, which takes up significantly less memory in the case of few local targets, the compacted node infrastructure, which causes only negligible constant serial memory overhead, and the reduced memory usage of synapse objects, which does not affect the precision of synaptic state variables. The improved code does not compromise on the general usability of NEST and will be merged into the common code base to be released with NEST 2.4. We show that with the 4g technology it will be possible to simulate networks of 10^9 neurons and 10,000 synapses per neuron on the K computer.
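As a concrete, purely illustrative example of how the memory footprint of a synapse object can be reduced without touching the precision of its state variables, integer bookkeeping fields such as the delay (in simulation steps) and the synapse-type index can be packed into a single machine word. The field widths and names below are assumptions made for this sketch and do not describe the actual NEST layout.

```cpp
// Illustration only: bit-packing of synapse bookkeeping fields.
// Layout chosen for this sketch: [delay:21 | syn_id:8 | unused:3].
#include <cassert>
#include <cstdint>

struct PackedSynapseInfo {
  std::uint32_t bits = 0;

  void set_delay_steps(std::uint32_t d) {
    assert(d < (1u << 21));
    bits = (bits & ~((1u << 21) - 1)) | d;
  }
  std::uint32_t delay_steps() const { return bits & ((1u << 21) - 1); }

  void set_syn_id(std::uint32_t id) {
    assert(id < (1u << 8));
    bits = (bits & ~(((1u << 8) - 1) << 21)) | (id << 21);
  }
  std::uint32_t syn_id() const { return (bits >> 21) & ((1u << 8) - 1); }
};

struct StdpSynapse {
  double weight;              // full-precision state is kept as-is
  PackedSynapseInfo info;     // compact bookkeeping replaces several wider fields
};

int main() {
  StdpSynapse s{0.5, {}};
  s.info.set_delay_steps(15); // 1.5 ms at a 0.1 ms resolution
  s.info.set_syn_id(3);
  assert(s.info.delay_steps() == 15 && s.info.syn_id() == 3);
}
```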