Publication


Featured research published by Tomoaki Ishiyama.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2012

4.45 Pflops astrophysical N-body simulation on K computer: the gravitational trillion-body problem

Tomoaki Ishiyama; Keigo Nitadori; Junichiro Makino

As an entry for the 2012 Gordon Bell performance prize, we report performance results of astrophysical N-body simulations of one trillion particles performed on the full system of the K computer. This is the first gravitational trillion-body simulation in the world. We describe the scientific motivation, the numerical algorithm, the parallelization strategy, and the performance analysis. Unlike many previous Gordon Bell prize winners that used the tree algorithm for astrophysical N-body simulations, we used the hybrid TreePM method at a similar level of accuracy, in which the short-range force is calculated by the tree algorithm and the long-range force is solved by the particle-mesh algorithm. We developed a highly tuned gravity kernel for short-range forces and a novel communication algorithm for long-range forces. The average performance on 24,576 and 82,944 nodes of the K computer is 1.53 and 4.45 Pflops, respectively, corresponding to 49% and 42% of the peak speed.
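The TreePM force split mentioned in the abstract can be sketched in a few lines. The sketch below is illustrative only, not the authors' tuned kernel: it uses the common Ewald-style complementary-error-function split, and the direct-sum loop stands in for the tree walk. All function and parameter names are hypothetical.

```python
import math

def short_range_factor(r, r_s):
    """Fraction of the Newtonian 1/r^2 pair force assigned to the
    short-range (tree) part in an Ewald-style TreePM split; the
    particle-mesh part supplies the complementary long-range piece."""
    x = r / (2.0 * r_s)
    return math.erfc(x) + (2.0 * x / math.sqrt(math.pi)) * math.exp(-x * x)

def short_range_accel(pos, mass, i, r_s, G=1.0):
    """Short-range acceleration on particle i by direct summation
    (a stand-in for the tree walk used in the actual code)."""
    ax = ay = az = 0.0
    xi, yi, zi = pos[i]
    for j, (xj, yj, zj) in enumerate(pos):
        if j == i:
            continue
        dx, dy, dz = xj - xi, yj - yi, zj - zi
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        f = G * mass[j] * short_range_factor(r, r_s) / r**3
        ax += f * dx
        ay += f * dy
        az += f * dz
    return ax, ay, az
```

At separations much smaller than the splitting scale r_s the factor approaches 1, so the tree part recovers the full Newtonian force; at separations much larger it falls to 0, leaving the force entirely to the mesh.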


Publications of the Astronomical Society of Japan | 2009

GreeM: Massively Parallel TreePM Code for Large Cosmological N-body Simulations

Tomoaki Ishiyama; Toshiyuki Fukushige; Junichiro Makino

In this paper, we describe the implementation and performance of GreeM, a massively parallel TreePM code for large-scale cosmological N-body simulations. GreeM uses a recursive multi-section algorithm for domain decomposition. The sizes of the domains are adjusted so that the total force-calculation time becomes the same for all processes. The loss of performance due to non-optimal load balancing is around 4%, even for more than 10^3 CPU cores. GreeM runs efficiently on PC clusters and massively parallel computers such as a Cray XT4. The measured calculation speed on the Cray XT4 is 5 × 10^4 particles per second per CPU core for an opening angle of θ = 0.5, provided the number of particles per CPU core is larger than 10^6.
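The load-balancing idea behind the recursive multi-section decomposition can be illustrated with a one-dimensional sketch (names are hypothetical; the real code applies such cuts recursively along the three axes in turn, using measured calculation times as the per-particle costs):

```python
def multisection_cuts(coords, costs, nsec):
    """Choose nsec - 1 cut positions along one axis so that each of
    the nsec resulting sections carries roughly the same total cost.
    `costs` plays the role of per-particle force-calculation time."""
    order = sorted(range(len(coords)), key=lambda i: coords[i])
    total = float(sum(costs))
    cuts, acc, k = [], 0.0, 1
    for i in order:
        acc += costs[i]
        # Place a cut once the accumulated cost reaches the next
        # equal share of the total.
        if k < nsec and acc >= k * total / nsec:
            cuts.append(coords[i])
            k += 1
    return cuts
```

A full decomposition would cut along x first, then along y within each x-slab, then along z, so that each process ends up with a rectangular domain of roughly equal work rather than equal volume.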


The Astrophysical Journal | 2009

VARIATION OF THE SUBHALO ABUNDANCE IN DARK MATTER HALOS

Tomoaki Ishiyama; Toshiyuki Fukushige; Junichiro Makino

We analyzed the statistics of subhalo abundance of galaxy-sized and giant-galaxy-sized halos formed in a high-resolution cosmological simulation of a 46.5 Mpc cube with a uniform mass resolution of 10^6 Msun. We analyzed all halos with mass greater than 1.5 × 10^12 Msun formed in this simulation box; the total number of halos was 125. We found that the subhalo abundance, measured by the number of subhalos with maximum rotation velocity larger than 10% of that of the parent halo, shows large halo-to-halo variation. The results of recent ultra-high-resolution runs fall within the variation of our samples. We found that the concentration parameter and the radius at the moment of maximum expansion show fairly tight correlations with the subhalo abundance. This correlation suggests that the variation of the subhalo abundance is at least partly due to differences in the formation history: halos formed earlier have a smaller number of subhalos at present.
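The abundance measure used here, counting subhalos whose maximum rotation velocity exceeds 10% of the parent halo's, is simple to state in code (a minimal sketch with hypothetical names):

```python
def subhalo_abundance(parent_vmax, subhalo_vmaxes, fraction=0.1):
    """Number of subhalos whose maximum rotation velocity exceeds
    the given fraction (10% in the paper) of the parent halo's."""
    threshold = fraction * parent_vmax
    return sum(1 for v in subhalo_vmaxes if v > threshold)
```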


Monthly Notices of the Royal Astronomical Society | 2013

Evolution of star clusters in a cosmological tidal field

Steven Rieder; Tomoaki Ishiyama; Paul Langelaan; Junichiro Makino; Stephen L. W. McMillan; Simon Portegies Zwart

We present a method to couple N-body star cluster simulations to a cosmological tidal field, using the Astrophysical Multipurpose Software Environment (AMUSE). We apply this method to star clusters embedded in the CosmoGrid dark-matter-only LambdaCDM simulation. Our star clusters are born at z = 10 (corresponding to an age of the Universe of about 500 Myr) by selecting a dark matter particle and initializing a star cluster with 32,000 stars at its location. We then follow the dynamical evolution of the star cluster within the cosmological environment. We compare the evolution of star clusters in two Milky-Way-sized haloes with different accretion histories. The mass loss of the star clusters is continuous irrespective of the tidal history of the host halo, but major merger events tend to increase the rate of mass loss. Of the two selected dark matter haloes, the one that experienced the larger number of mergers tends to drive a smaller mass-loss rate from the embedded star clusters, even though the final masses of both haloes are similar. We identify two families of star clusters: native clusters, which become part of the main halo before its final major merger event, and immigrant clusters, which are accreted upon or after this event; native clusters tend to evaporate more quickly than immigrant clusters. Accounting for the evolution of the dark matter halo causes immigrant star clusters to retain more mass than when the z = 0 tidal field is taken as a static potential. The reason is the weaker tidal field experienced by immigrant star clusters before merging with the larger dark matter halo.
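In couplings of this style, the cosmological simulation enters the cluster integration as an extra acceleration added on top of the cluster's internal N-body forces. A minimal sketch of the idea, assuming the cosmological field has been reduced to a 3×3 tidal tensor interpolated from the dark matter run (names are illustrative, not the actual AMUSE Bridge API):

```python
def tidal_accel(tensor, x):
    """Tidal acceleration on a star at position x (relative to the
    cluster centre) from a 3x3 tidal tensor T: a_i = sum_j T[i][j] * x[j].
    In a real coupling, T would be interpolated in time between
    snapshots of the cosmological simulation."""
    return [sum(tensor[i][j] * x[j] for j in range(3)) for i in range(3)]
```

Each cluster time step would add this term to every star's internal acceleration, so the cluster feels the evolving cosmological environment without resolving the dark matter particles directly.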


IEEE International Conference on High Performance Computing, Data, and Analytics | 2014

24.77 Pflops on a gravitational tree-code to simulate the Milky Way Galaxy with 18600 GPUs

Jeroen Bédorf; Evghenii Gaburov; Michiko S. Fujii; Keigo Nitadori; Tomoaki Ishiyama; Simon Portegies Zwart

We have simulated, for the first time, the long-term evolution of the Milky Way Galaxy using 51 billion particles on the Swiss Piz Daint supercomputer with our N-body gravitational tree-code Bonsai. Herein, we describe the scientific motivation and numerical algorithms. The Milky Way model was simulated for 6 billion years, during which the bar structure and spiral arms were fully formed. This improves upon previous simulations by using 1000 times more particles, and provides a wealth of new data that can be directly compared with observations. We also report the scalability on both the Swiss Piz Daint and the US ORNL Titan. On Piz Daint the parallel efficiency of Bonsai was above 95%. The highest performance was achieved with a 242-billion-particle Milky Way model using 18,600 GPUs on Titan, reaching a sustained GPU and application performance of 33.49 Pflops and 24.77 Pflops, respectively.


IEEE Computer | 2010

Simulating the Universe on an Intercontinental Grid

Simon Portegies Zwart; Tomoaki Ishiyama; Derek Groen; Keigo Nitadori; Junichiro Makino; Cees de Laat; Stephen L. W. McMillan; Kei Hiraki; Stefan Harfst; Paola Grosso

The computational requirements of simulating a sector of the universe led an international team of researchers to try concurrent processing on two supercomputers half a world apart. Data traveled nearly 27,000 km in 0.277 second, crisscrossing two oceans to go from Amsterdam to Tokyo and back.


The Astrophysical Journal | 2016

WHERE ARE THE LOW-MASS POPULATION III STARS?

Tomoaki Ishiyama; Kae Sudo; Shingo Yokoi; Kenji Hasegawa; Nozomu Tominaga; Hajime Susa

We study the number and the distribution of low-mass Pop III stars in the Milky Way. In our numerical model, the hierarchical formation of dark matter minihalos and Milky Way sized halos is followed by a high-resolution cosmological simulation. We model Pop III formation in metal-free H2-cooling minihalos under UV radiation in the Lyman-Werner bands. Assuming, as a working hypothesis, a Kroupa IMF from 0.15 to 1.0 Msun for low-mass Pop III stars, we try to constrain the theoretical models in reverse by current and future observations. We find that the survivors tend to concentrate toward the centers of the halo and its subhalos. We also evaluate the observability of Pop III survivors in the Milky Way and dwarf galaxies, and constraints on the number of Pop III survivors per minihalo. Higher-latitude fields require smaller sample sizes because of the high number density of stars in the galactic disk; by photometrically selecting low-metallicity stars with optimized narrow-band filters, the required sample sizes become comparable in the high- and middle-latitude fields; and the required number of dwarf galaxies to find one Pop III survivor is less than ten at 10 Msun.


Publications of the Astronomical Society of Japan | 2015

The ν2GC simulations: Quantifying the dark side of the universe in the Planck cosmology

Tomoaki Ishiyama; Motohiro Enoki; Masakazu A. R. Kobayashi; Ryu Makiya; Masahiro Nagashima; Taira Oogi

We present the evolution of dark matter halos in six large cosmological N-body simulations, called the ν2GC (New Numerical Galaxy Catalog) simulations, on the basis of the LCDM cosmology consistent with observational results obtained by the Planck satellite. The largest simulation consists of …


The Astrophysical Journal | 2014

ANTI-HIERARCHICAL EVOLUTION OF THE ACTIVE GALACTIC NUCLEUS SPACE DENSITY IN A HIERARCHICAL UNIVERSE

Motohiro Enoki; Tomoaki Ishiyama; Masakazu A. R. Kobayashi; Masahiro Nagashima


Monthly Notices of the Royal Astronomical Society | 2014

The connection between the cusp-to-core transformation and observational universalities of DM haloes

Go Ogiya; Masao Mori; Tomoaki Ishiyama; Andreas Burkert

Collaboration


Dive into Tomoaki Ishiyama's collaborations.

Top Co-Authors

Masakazu Kobayashi

Toyohashi University of Technology
