Jun Igarashi
RIKEN Brain Science Institute
Publications
Featured research published by Jun Igarashi.
Frontiers in Neuroinformatics | 2014
Susanne Kunkel; Maximilian Schmidt; Jochen Martin Eppler; Hans E. Plesser; Gen Masumoto; Jun Igarashi; Shin Ishii; Tomoki Fukai; Abigail Morrison; Markus Diesmann; Moritz Helias
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
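The double collapse lends itself to a per-source container that stores a lone synapse inline and allocates a list only when a second local target appears. Below is a minimal Python sketch of the idea; the actual kernel realizes this in C++ with metaprogramming, and all class and method names here are hypothetical.

```python
# Illustrative sketch of the "double collapse": on a single compute node,
# a source neuron typically has at most one local synapse, usually of a
# single type, so the container avoids list allocation and per-synapse
# type tags in the common case. Names are hypothetical, not NEST's.

class StaticSynapse:
    __slots__ = ("target", "weight")

    def __init__(self, target, weight):
        self.target, self.weight = target, weight

    def send(self, spike_event):
        print("deliver", spike_event, "to neuron", self.target)


class TargetList:
    __slots__ = ("entry",)

    def __init__(self, synapse):
        self.entry = synapse                    # common case: one synapse, no list

    def add(self, synapse):
        if isinstance(self.entry, list):
            self.entry.append(synapse)          # rare case: several local targets
        else:
            self.entry = [self.entry, synapse]  # grow to a list only when needed

    def deliver(self, spike_event):
        entries = self.entry if isinstance(self.entry, list) else [self.entry]
        for synapse in entries:
            synapse.send(spike_event)


tl = TargetList(StaticSynapse(target=12, weight=0.3))
tl.deliver("spike at t=1.5 ms")
```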
The Journal of Neuroscience | 2013
Jun Igarashi; Yoshikazu Isomura; Kensuke Arai; Rie Harukuni; Tomoki Fukai
Sequential motor behavior requires a progression of discrete preparation and execution states. However, the organization of state-dependent activity in neuronal ensembles of motor cortex is poorly understood. Here, we recorded neuronal spiking and local field potential activity from rat motor cortex during reward-motivated movement and observed robust behavioral state-dependent coordination between neuronal spiking, γ oscillations, and θ oscillations. Slow and fast γ oscillations appeared during distinct movement states and entrained neuronal firing. γ oscillations, in turn, were coupled to θ oscillations, and neurons encoding different behavioral states fired at distinct phases of θ in a highly layer-dependent manner. These findings indicate that θ and nested dual-band γ oscillations serve as the temporal structure for the selection of a conserved set of functional channels in motor cortical layer activity during animal movement. Furthermore, these results also suggest that cross-frequency couplings between oscillatory neuronal ensemble activities are part of the general coding mechanism in cortex.
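θ–γ coupling of this kind is commonly quantified with phase-amplitude measures. A minimal sketch of one such measure, the mean-vector-length modulation index computed via the Hilbert transform, follows; the band edges and the surrogate signal are our assumptions, not necessarily the paper's exact analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(lfp, fs, theta=(4.0, 10.0), gamma=(60.0, 90.0)):
    """Mean-vector-length phase-amplitude coupling (Canolty-style)."""
    phase = np.angle(hilbert(bandpass(lfp, *theta, fs)))  # theta phase
    amp = np.abs(hilbert(bandpass(lfp, *gamma, fs)))      # gamma envelope
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Surrogate LFP: 8 Hz theta whose peaks carry 75 Hz gamma bursts.
fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
theta_wave = np.sin(2 * np.pi * 8 * t)
lfp = theta_wave + 0.3 * (theta_wave > 0.9) * np.sin(2 * np.pi * 75 * t)
print(modulation_index(lfp, fs))
```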
Frontiers in Neuroinformatics | 2012
Moritz Helias; Susanne Kunkel; Gen Masumoto; Jun Igarashi; Jochen Martin Eppler; Shin Ishii; Tomoki Fukai; Abigail Morrison; Markus Diesmann
NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.
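The role of such a memory model can be illustrated with a back-of-the-envelope sketch: per-process consumption decomposes into a fixed base, terms proportional to the local numbers of neurons and synapses, and an infrastructure term that grows with the total network size on every process. All constants below are illustrative assumptions, not the measured values from the paper.

```python
def memory_per_process(N, K, M, m_base=2.0e9, m_neuron=1.5e3,
                       m_synapse=48.0, m_infra=16.0):
    """Rough bytes per MPI process for N neurons with K synapses each,
    distributed over M processes (all constants are assumptions)."""
    neurons_local = N / M            # neurons distributed round-robin
    synapses_local = N * K / M       # each synapse lives on one process only
    infrastructure = N * m_infra     # naive: every process knows every neuron
    return (m_base
            + neurons_local * m_neuron
            + synapses_local * m_synapse
            + infrastructure)

# At scale the infrastructure term dominates, which is what the kernel
# redesigns described in the neighboring papers remove.
print("%.1f GB" % (memory_per_process(N=1e8, K=1e4, M=98304) / 1e9))
```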
Frontiers in Neuroinformatics | 2015
Jan Hahne; Moritz Helias; Susanne Kunkel; Jun Igarashi; Matthias Bolten; Andreas Frommer; Markus Diesmann
Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology.
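The core of the waveform-relaxation idea can be sketched in a few lines: within each communication interval, every neuron is integrated over the whole interval using the previous iterate of its partners' membrane-potential traces, and the sweep repeats until the waveforms stop changing. A toy Jacobi iteration for two gap-junction-coupled leaky membranes follows; the dynamics and parameters are illustrative, not NEST's solver.

```python
import numpy as np

def relax_interval(V0, T=1.0, dt=0.01, g_gap=0.5, tau=10.0,
                   max_sweeps=50, tol=1e-10):
    """Jacobi waveform relaxation for two gap-junction-coupled leaky
    membranes over one communication interval of length T."""
    n = int(T / dt)
    V = np.tile(np.asarray(V0, dtype=float)[:, None], (1, n + 1))
    for sweep in range(max_sweeps):
        V_old = V.copy()
        for i in (0, 1):
            j = 1 - i
            for k in range(n):
                # explicit Euler step; the gap current uses the partner's
                # waveform from the PREVIOUS sweep only
                dV = (-V[i, k] + g_gap * (V_old[j, k] - V[i, k])) / tau
                V[i, k + 1] = V[i, k] + dt * dV
        if np.max(np.abs(V - V_old)) < tol:
            break                    # waveforms converged for this interval
    return V

# Two neurons started at opposite potentials relax toward each other.
print(relax_interval([1.0, -1.0])[:, -1])
```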
Frontiers in Neuroinformatics | 2018
Jakob Jordan; Jun Igarashi; Markus Diesmann; Tammo Ippen; Moritz Helias; Mitsuhisa Sato; Itaru Kitayama; Susanne Kunkel
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10% of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
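The directed-communication idea can be illustrated with a small routing sketch: each rank records, per local source neuron, only the ranks that actually host one of its targets, and sorts outgoing spikes into per-rank buffers instead of broadcasting every spike to all ranks. This is a plain-Python sketch without MPI; the names are hypothetical, and the real implementation hands such buffers to collective operations like MPI_Alltoall.

```python
from collections import defaultdict

class DirectedRouter:
    def __init__(self):
        self.target_ranks = defaultdict(set)  # source neuron -> ranks with targets

    def register_connection(self, source, target_rank):
        self.target_ranks[source].add(target_rank)

    def route(self, spikes):
        """Sort spike events into one buffer per destination rank."""
        buffers = defaultdict(list)
        for source, t in spikes:
            # sparse at brain scale: each source reaches only a few ranks
            for rank in self.target_ranks[source]:
                buffers[rank].append((source, t))
        return dict(buffers)  # would be passed to e.g. MPI_Alltoallv

router = DirectedRouter()
router.register_connection(source=7, target_rank=3)
router.register_connection(source=7, target_rank=42)
print(router.route([(7, 0.1)]))
```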
BMC Neuroscience | 2013
Susanne Kunkel; Maximilian Schmidt; Jochen Martin Eppler; Hans E. Plesser; Jun Igarashi; Gen Masumoto; Tomoki Fukai; Shin Ishii; Abigail Morrison; Markus Diesmann; Moritz Helias
Over the last couple of years, supercomputers such as the Blue Gene/Q system JUQUEEN in Jülich and the K computer in Kobe have become available for neuroscience research. These massively parallel systems open the field for a new class of scientific questions as they provide the resources to represent and simulate brain-scale networks, but they also confront the developers of simulation software with a new class of problems. Initial tests with our neuronal network simulator NEST [1] on JUGENE (the predecessor of JUQUEEN) revealed that in order to exploit the memory capacities of such machines, we needed to improve the parallelization of the fundamental data structures. To address this, we developed an analytical framework [2], which serves as a guideline for a systematic and iterative restructuring of the simulation kernel. In December 2012, the 3rd generation technology was released with NEST 2.2, which enables simulations of 10^8 neurons and 10,000 synapses per neuron on the K computer [3]. Even though the redesign of the fundamental data structures of NEST is driven by the demand for simulations of interacting brain areas, we do not aim at solutions tailored to a specific brain-scale model or computing architecture. Our goal is to maintain a single highly scalable code base that meets the requirements of such simulations whilst still performing well on modestly dimensioned lab clusters and even laptops. Here, we introduce the 4th generation simulation kernel and describe the development workflow that yielded the following three major improvements: the self-collapsing connection infrastructure, which takes up significantly less memory in the case of few local targets, the compacted node infrastructure, which causes only negligible constant serial memory overhead, and the reduced memory usage of synapse objects, which does not affect the precision of synaptic state variables. The improved code does not compromise on the general usability of NEST and will be merged into the common code base to be released with NEST 2.4. We show that with the 4g technology it will be possible to simulate networks of 10^9 neurons and 10,000 synapses per neuron on the K computer.
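The effect of the third improvement, smaller synapse objects, can be made concrete with a toy layout comparison; the field widths below are assumptions for illustration, not NEST's actual memory layout.

```python
import numpy as np

# Toy comparison of per-synapse storage. Shrinking the target index and
# expressing the delay in integer time steps reduces the footprint while
# leaving the weight, i.e., the synaptic state variable, at full precision.
wide = np.dtype([("target", np.uint64), ("delay", np.float64),
                 ("weight", np.float64)])
compact = np.dtype([("target", np.uint32),    # node-local index, not a global id
                    ("delay", np.uint16),     # delay in simulation steps
                    ("weight", np.float64)])  # precision untouched

n_syn = 10**7                                 # synapses on one compute node
print("wide:    %.1f MB" % (n_syn * wide.itemsize / 2**20))
print("compact: %.1f MB" % (n_syn * compact.itemsize / 2**20))
```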
Frontiers in Neuroinformatics | 2018
Jakob Jordan; Tammo Ippen; Moritz Helias; Itaru Kitayama; Mitsuhisa Sato; Jun Igarashi; Markus Diesmann; Susanne Kunkel
[This corrects the article DOI: 10.3389/fninf.2018.00002.].
Second International Workshop, BrainComp 2015, Cetraro, Italy, July 6-10, 2015 | 2015
Jan Hahne; Moritz Helias; Susanne Kunkel; Jun Igarashi; Itaru Kitayama; Brian J. N. Wylie; Matthias Bolten; Andreas Frommer; Markus Diesmann
Contemporary simulation technology for neuronal networks enables the simulation of brain-scale networks using neuron models with a single or a few compartments. However, distributed simulations at full cell density are still lacking the electrical coupling between cells via so-called gap junctions. This is due to the absence of efficient algorithms to simulate gap junctions on large parallel computers. The difficulty is that gap junctions require an instantaneous interaction between the coupled neurons, whereas the efficiency of simulation codes for spiking neurons relies on delayed communication. In a recent paper [15] we describe a technology to overcome this obstacle. Here, we give an overview of the challenges to include gap junctions into a distributed simulation scheme for neuronal networks and present an implementation of the new technology available in the NEural Simulation Tool (NEST 2.10.0). Subsequently we introduce the usage of gap junctions in model scripts as well as benchmarks assessing the performance and overhead of the technology on the supercomputers JUQUEEN and K computer.
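As an illustration of the model-script usage mentioned above, here is a minimal PyNEST sketch in the style of NEST's two-neuron gap-junction example. Keyword names (e.g. "model" vs. "synapse_model") and the availability of "make_symmetric" vary across NEST versions, so treat the exact spelling as an assumption following the NEST 2.x interface.

```python
import nest

nest.ResetKernel()

# Two Hodgkin-Huxley neurons with gap-junction support; only the first
# receives a constant input current.
neurons = nest.Create("hh_psc_alpha_gap", 2)
nest.SetStatus([neurons[0]], {"I_e": 100.0})

# A gap junction is symmetric, so both directions are created at once.
nest.Connect([neurons[0]], [neurons[1]],
             conn_spec={"rule": "one_to_one", "make_symmetric": True},
             syn_spec={"model": "gap_junction", "weight": 0.5})

nest.Simulate(100.0)  # the undriven neuron is pulled along via the gap current
```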
F1000Research | 2014
Jan Morén; Jun Igarashi; Osamu Shouno; Manish N. Sreenivasa; Ko Ayusawa; Yoshihiko Nakamura; Kenji Doya
Neuroscience Research | 2010
Jun Igarashi; Yoshikazu Isomura; Tomoki Fukai