Publication


Featured research published by Mikael Djurfeldt.


Journal of Computational Neuroscience | 2007

Simulation of networks of spiking neurons: A review of tools and strategies

Romain Brette; Michelle Rudolph; Ted Carnevale; Michael L. Hines; David Beeman; James M. Bower; Markus Diesmann; Abigail Morrison; Philip H. Goodman; Frederick C. Harris; Milind Zirpe; Thomas Natschläger; Dejan Pecevski; Bard Ermentrout; Mikael Djurfeldt; Anders Lansner; Olivier Rochel; Thierry Viéville; Eilif Muller; Andrew P. Davison; Sami El Boustani; Alain Destexhe

We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We then give an overview of the simulators and simulation environments presently available (restricted to those that are freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin–Huxley-type and integrate-and-fire models, interacting through current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the code is made available. The ultimate goal of this review is to provide a resource that facilitates identifying the appropriate integration strategy and simulation tool for a given modeling problem related to spiking neural networks.
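
To make the clock-driven strategy concrete, here is a minimal, self-contained sketch of a fixed-time-step simulation of leaky integrate-and-fire neurons with current-based synapses. It is not taken from the paper's benchmark suite, and all parameter values are illustrative.

    import numpy as np

    # Clock-driven strategy: advance all state variables on a fixed time grid
    # and test for threshold crossings once per step.  (An event-driven
    # simulator would instead update neurons only at incoming spike times.)
    N, steps, dt = 100, 5000, 1e-4            # neurons, steps, step size (s)
    tau_m, v_rest, v_th, v_reset = 0.02, -70e-3, -54e-3, -70e-3
    w = 0.3e-3                                 # current-based weight (V)
    drive = 18e-3                              # constant external drive (V)

    rng = np.random.default_rng(0)
    conn = (rng.random((N, N)) < 0.1).astype(float)   # 10% random coupling

    v = np.full(N, v_rest)
    for _ in range(steps):
        spiked = v >= v_th                     # threshold test on the grid
        v[spiked] = v_reset
        # forward-Euler step of dv/dt = (v_rest - v + drive)/tau_m plus the
        # summed synaptic input from neurons that spiked on this step
        v += dt / tau_m * (v_rest - v + drive) + w * (conn @ spiked)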


Frontiers in Neuroinformatics | 2008

Large-Scale Modeling – a Tool for Conquering the Complexity of the Brain

Mikael Djurfeldt; Örjan Ekeberg; Anders Lansner

Is there any hope of achieving a thorough understanding of higher functions such as perception, memory, thought and emotion, or is the stunning complexity of the brain a barrier that will limit such efforts for the foreseeable future? In this perspective we discuss methods for handling complexity and approaches to model building, and point to detailed large-scale models as a new contribution to the toolbox of the computational neuroscientist. We elucidate some aspects that distinguish large-scale models and some of the technological challenges they entail.


IBM Journal of Research and Development | 2008

Brain-scale simulation of the neocortex on the IBM Blue Gene/L supercomputer

Mikael Djurfeldt; Mikael Lundqvist; Christopher Johansson; Martin Rehn; Örjan Ekeberg; Anders Lansner

Biologically detailed large-scale models of the brain can now be simulated thanks to increasingly powerful massively parallel supercomputers. We present an overview, for the general technical reader, of a neuronal network model of layers II/III of the neocortex built with biophysical model neurons. These simulations, carried out on an IBM Blue Gene/L™ supercomputer, comprise up to 22 million neurons and 11 billion synapses, which makes them the largest simulations of this type ever performed. Such model sizes correspond to the cortex of a small mammal. The SPLIT library, used for these simulations, runs on single-processor as well as massively parallel machines. Performance measurements show good scaling behavior on the Blue Gene/L supercomputer up to 8,192 processors. Several key phenomena seen in the living brain appear as emergent phenomena in the simulations. We discuss the role of this kind of model in neuroscience and note that full-scale models may be necessary to preserve natural dynamics. We also discuss the need for software tools for the specification of models as well as for analysis and visualization of output data. Combining models that range from abstract connectionist type to biophysically detailed will help us unravel the basic principles underlying neocortical function.


BMC Neuroscience | 2011

The Connection-set Algebra: a formalism for the representation of connectivity structure in neuronal network models, implementations in Python and C++, and their use in simulators

Mikael Djurfeldt

The connection-set algebra (CSA) [1,2] is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. It provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. The CSA is expressive enough to describe a wide range of connectivities and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even avoid explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. Here, a Python implementation [4] of the connection-set algebra is presented together with its application to describing various network connectivity patterns. In addition, it is shown how CSA can be used to describe network models in the PyNN [5] and NineML [6] network model description languages.
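
The flavor of the algebra can be conveyed by a small self-contained toy in which connection sets are explicit sets of (source, target) pairs combined with set operators. The function names below are illustrative stand-ins, not the API of the released Python package, which represents possibly infinite connection sets lazily rather than enumerating them.

    import random

    # Toy connection sets: explicit sets of (source, target) index pairs.
    def one_to_one(n):
        return {(i, i) for i in range(n)}

    def random_connections(n_pre, n_post, p, seed=0):
        rng = random.Random(seed)
        return {(i, j) for i in range(n_pre) for j in range(n_post)
                if rng.random() < p}

    # Set operators compose complex connectivity from elementary sets:
    # 10% random connectivity with self-connections excluded.
    n = 50
    c = random_connections(n, n, 0.1) - one_to_one(n)

    for pre, post in sorted(c):
        pass  # a simulator would create one synapse per pair here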


Network: Computation in Neural Systems | 2006

Attractor dynamics in a modular network model of neocortex

Mikael Lundqvist; Martin Rehn; Mikael Djurfeldt; Anders Lansner

Starting from the hypothesis that the mammalian neocortex to a first approximation functions as an associative memory of the attractor network type, we formulate a quantitative computational model of neocortical layers 2/3. The model employs biophysically detailed multi-compartmental model neurons with conductance based synapses and includes pyramidal cells and two types of inhibitory interneurons, i.e., regular spiking non-pyramidal cells and basket cells. The simulated network has a minicolumnar as well as a hypercolumnar modular structure and we propose that minicolumns rather than single cells are the basic computational units in neocortex. The minicolumns are represented in full scale and synaptic input to the different types of model neurons is carefully matched to reproduce experimentally measured values and to allow a quantitative reproduction of single cell recordings. Several key phenomena seen experimentally in vitro and in vivo appear as emergent features of this model. It exhibits a robust and fast attractor dynamics with pattern completion and pattern rivalry and it suggests an explanation for the so-called attentional blink phenomenon. During assembly dynamics, the model faithfully reproduces several features of local UP states, as they have been experimentally observed in vitro, as well as oscillatory behavior similar to that observed in the neocortex.
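
Pattern completion in an attractor network can be illustrated with a far simpler toy than the biophysical model of the paper: a Hopfield-style network that relaxes a degraded cue to the nearest stored pattern. A minimal sketch with illustrative sizes:

    import numpy as np

    # Hopfield-style toy: store two random binary patterns with a Hebbian
    # outer-product rule, then complete a corrupted cue.
    rng = np.random.default_rng(1)
    N = 200
    patterns = rng.choice([-1, 1], size=(2, N))
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0)

    cue = patterns[0].copy()
    flipped = rng.choice(N, size=N // 4, replace=False)
    cue[flipped] *= -1                   # corrupt a quarter of the bits

    s = cue
    for _ in range(10):                  # synchronous relaxation to attractor
        s = np.where(W @ s >= 0, 1, -1)

    print("recovered fraction:", (s == patterns[0]).mean())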


Neuroinformatics | 2010

Run-Time Interoperability Between Neuronal Network Simulators Based on the MUSIC Framework

Mikael Djurfeldt; Johannes Hjorth; Jochen Martin Eppler; Niraj Dudani; Moritz Helias; Tobias C. Potjans; Upinder S. Bhalla; Markus Diesmann; Jeanette Hellgren Kotaleski; Örjan Ekeberg

MUSIC is a standard API allowing large-scale neuronal network simulators to exchange data within a parallel computer during runtime. A pilot implementation of this API has been released as open source. We provide experiences from the implementation of MUSIC interfaces for two neuronal network simulators of different kinds, NEST and MOOSE. A multi-simulation of a cortico-striatal network model involving both simulators is performed, demonstrating how MUSIC can promote interoperability between models written for different simulators and how these can be reused to build a larger model system. Benchmarks show that the MUSIC pilot implementation provides efficient data transfer in a cluster computer with good scaling. We conclude that MUSIC fulfills the design goal that it should be simple to adapt existing simulators to use MUSIC. In addition, since the MUSIC API enforces independence of the applications, the multi-simulation could be built from pluggable component modules without adapting the components to each other in terms of simulation time step or topology of connections between the modules.
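
The coordination pattern behind MUSIC (publish ports during a setup phase, then advance all applications in lock-step ticks while the framework moves data between ports) can be sketched in toy form. The classes below are illustrative only; the real MUSIC API is a C++ library layered on MPI.

    from collections import deque

    class EventPort:
        """Toy stand-in for a MUSIC event port: a shared spike buffer."""
        def __init__(self):
            self.buffer = deque()

    class Sender:
        def __init__(self, port):
            self.port, self.t = port, 0.0
        def tick(self, dt):
            self.port.buffer.append(self.t)    # emit a time-stamped "spike"
            self.t += dt

    class Receiver:
        def __init__(self, port):
            self.port, self.t, self.events = port, 0.0, []
        def tick(self, dt):
            while self.port.buffer:            # drain events delivered so far
                self.events.append(self.port.buffer.popleft())
            self.t += dt

    port = EventPort()                         # "published" during setup
    a, b = Sender(port), Receiver(port)
    for _ in range(10):                        # runtime phase: lock-step ticks
        a.tick(0.1)
        b.tick(0.1)
    print(len(b.events), "events transferred")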


Molecular Brain Research | 1990

Distribution and cellular localization of DARPP-32 mRNA in rat brain

Martin Schalling; Mikael Djurfeldt; Tomas Hökfelt; Michelle Ehrlich; Tatsuya Kurihara; Paul Greengard

In situ hybridization histochemistry has been used to determine the regional distribution and cellular localization of DARPP-32 mRNA in the rat brain. The results support the concept that DARPP-32 is present primarily in cells expressing the dopamine D1 receptor subtype, and that DARPP-32 is not synthesized in dopamine-containing cells. Strongly labelled neuronal cell bodies were found in the caudate nucleus, nucleus accumbens, olfactory tubercle, parts of the bed nucleus of the stria terminalis, and the amygdaloid complex. In addition, large amounts of DARPP-32 mRNA were visualized in the medial habenula and around the third ventricle, in ependymal cells and tanycytes, and in the cerebellar Purkinje cells. Less pronounced activity was seen in layers II-III and VI throughout the cerebral cortex. The present studies, together with previous biochemical and immunocytochemical studies, demonstrate that DARPP-32 gene expression occurs primarily in D1 dopaminoceptive cells, although there are exceptions. In situ hybridization may thus be used to quantitate regulation of DARPP-32 mRNA in discrete brain regions.


Neurocomputing | 2001

Cortex-basal ganglia interaction and attractor states

Mikael Djurfeldt; Örjan Ekeberg; Ann M. Graybiel

We propose a set of hypotheses about how the basal ganglia contribute to information processing in cortical networks and how the cortex and basal ganglia interact during learning and behavior. We introduce a computational model at the level of a system of networks. We suggest that the basal ganglia control cortical activity by pushing a local cortical network into a new attractor state, thereby selecting certain attractors over others. The ideas of temporal difference learning and of convergence of corticostriatal fibers from multiple cortical areas within the striatum are combined in a modular learning system capable of acquiring behavior with sequential structure.
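
The temporal difference ingredient can be illustrated generically with a TD(0) value update on a toy chain of states. This sketch is not the paper's model; all names and numbers are illustrative.

    import random

    # TD(0) on a five-state chain with a reward on reaching the final state.
    V = [0.0] * 5                      # value estimates for states 0..4
    alpha, gamma = 0.1, 0.9            # learning rate, discount factor

    for _ in range(500):
        s = 0
        while s != 4:
            s_next = min(s + random.choice([0, 1]), 4)   # drift to the goal
            r = 1.0 if s_next == 4 else 0.0
            V[s] += alpha * (r + gamma * V[s_next] - V[s])   # TD(0) update
            s = s_next

    print([round(v, 2) for v in V])    # values rise toward the goal state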


Neuroinformatics | 2012

The Connection-set Algebra—A Novel Formalism for the Representation of Connectivity Structure in Neuronal Network Models

Mikael Djurfeldt

The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31–42, 2008b) and an implementation in Python has been publicly released.


Network: Computation in Neural Systems | 2012

Creating, documenting and sharing network models

Sharon M. Crook; James A. Bednar; Sandra D. Berger; Robert C. Cannon; Andrew P. Davison; Mikael Djurfeldt; Jochen Martin Eppler; Birgit Kriener; Steve B. Furber; Bruce P. Graham; Hans E. Plesser; Lars Schwabe; Leslie S. Smith; Volker Steuber; Sacha J. van Albada

As computational neuroscience matures, many simulation environments are available that are useful for neuronal network modeling. However, methods for successfully documenting models for publication and for exchanging models and model components among these projects are still under development. Here we briefly review existing software and applications for network model creation, documentation and exchange. Then we discuss a few of the larger issues facing the field of computational neuroscience regarding network modeling and suggest solutions to some of these problems, concentrating in particular on standardized network model terminology, notation, and descriptions and explicit documentation of model scaling. We hope this will enable and encourage computational neuroscientists to share their models more systematically in the future.

Collaboration


Dive into Mikael Djurfeldt's collaborations.

Top Co-Authors

Anders Lansner | Royal Institute of Technology
Örjan Ekeberg | Royal Institute of Technology
Ekaterina Brocke | National Centre for Biological Sciences
Upinder S. Bhalla | National Centre for Biological Sciences
Michael Hanke | Royal Institute of Technology
Andrew P. Davison | Centre national de la recherche scientifique
Hans E. Plesser | Norwegian University of Life Sciences