Keigo Nitadori
University of Tsukuba
Publications
Featured research published by Keigo Nitadori.
Monthly Notices of the Royal Astronomical Society | 2012
Keigo Nitadori; Sverre J. Aarseth
We describe the use of Graphics Processing Units (GPUs) for speeding up the code NBODY6 which is widely used for direct N-body simulations. Over the years, the N^2 nature of the direct force calculation has proved a barrier for extending the particle number. Following an early introduction of force polynomials and individual time-steps, the calculation cost was first reduced by the introduction of a neighbour scheme. After a decade of GRAPE computers which speeded up the force calculation further, we are now in the era of GPUs where relatively small hardware systems are highly cost-effective. A significant gain in efficiency is achieved by employing the GPU to obtain the so-called regular force which typically involves some 99 percent of the particles, while the remaining local forces are evaluated on the host. However, the latter operation is performed up to 20 times more frequently and may still account for a significant cost. This effort is reduced by parallel SSE/AVX procedures where each interaction term is calculated using mainly single precision. We also discuss further strategies connected with coordinate and velocity prediction required by the integration scheme. This leaves hard binaries and multiple close encounters which are treated by several regularization methods. The present NBODY6-GPU code is well balanced for simulations in the particle range 10^4 to 2 x 10^5 for a dual GPU system attached to a standard PC.

IEEE International Conference on High Performance Computing Data and Analytics | 2009
Tsuyoshi Hamada; Tetsu Narumi; Rio Yokota; Kenji Yasuoka; Keigo Nitadori; Makoto Taiji
As an entry for the 2009 Gordon Bell price/performance prize, we present the results of two different hierarchical N-body simulations on a cluster of 256 graphics processing units (GPUs). Unlike many previous N-body simulations on GPUs that scale as O(N^2), the present method calculates the O(N log N) treecode and O(N) fast multipole method (FMM) on the GPUs with unprecedented efficiency. We demonstrate the performance of our method by choosing one standard application, a gravitational N-body simulation, and one non-standard application, the simulation of turbulence using vortex particles. The gravitational simulation using the treecode with 1,608,044,129 particles showed a sustained performance of 42.15 TFlops. The vortex particle simulation of homogeneous isotropic turbulence using the periodic FMM with 16,777,216 particles showed a sustained performance of 20.2 TFlops. The overall cost of the hardware was 228,912 dollars. The maximum corrected performance is 28.1 TFlops for the gravitational simulation, which results in a cost performance of 124 MFlops/$. This correction is performed by counting the Flops based on the most efficient CPU algorithm. Any extra Flops that arise from the GPU implementation and parameter differences are not included in the 124 MFlops/$.

IEEE International Conference on High Performance Computing Data and Analytics | 2012
Tomoaki Ishiyama; Keigo Nitadori; Junichiro Makino

New Astronomy | 2008
Keigo Nitadori; Junichiro Makino

IEEE International Conference on High Performance Computing Data and Analytics | 2010
Tsuyoshi Hamada; Keigo Nitadori

Monthly Notices of the Royal Astronomical Society | 2011
Evghenii Gaburov; Keigo Nitadori

New Astronomy | 2013
Ataru Tanikawa; Kohji Yoshikawa; Keigo Nitadori; Takashi Okamoto

Monthly Notices of the Royal Astronomical Society | 2015
Long Wang; Rainer Spurzem; Sverre J. Aarseth; Keigo Nitadori; Peter Berczik; M. B. N. Kouwenhoven; Thorsten Naab

IEEE International Conference on High Performance Computing Data and Analytics | 2014
Jeroen Bédorf; Evghenii Gaburov; Michiko S. Fujii; Keigo Nitadori; Tomoaki Ishiyama; Simon Portegies Zwart

IEEE Computer | 2010
Simon Portegies Zwart; Tomoaki Ishiyama; Derek Groen; Keigo Nitadori; Junichiro Makino; Cees de Laat; Stephen L. W. McMillan; Kei Hiraki; Stefan Harfst; Paola Grosso
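The NBODY6 abstract above is built around the direct force summation, in which every particle feels the softened gravity of every other particle, giving the O(N^2) cost that GRAPE boards, SSE/AVX lanes and GPUs were all recruited to beat. As a hypothetical illustration only (not the authors' code; the function name `direct_forces` and the softening parameter `eps2` are assumptions), the kernel can be sketched as:

```python
import math

def direct_forces(pos, mass, eps2=1e-4):
    """O(N^2) direct summation of softened gravitational accelerations.

    pos  : list of (x, y, z) positions
    mass : list of masses (units with G = 1)
    eps2 : squared softening length, avoids singular close encounters
    """
    n = len(mass)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue                         # no self-interaction
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = dx[0]**2 + dx[1]**2 + dx[2]**2 + eps2
            f = mass[j] / (r2 * math.sqrt(r2))   # m_j / r^3
            for k in range(3):
                acc[i][k] += f * dx[k]
    return acc
```

Each of the N inner sweeps is independent of the others, which is why this sum maps so naturally onto GRAPE pipelines, SIMD lanes and GPU threads, and why the regular/local force split described in the abstract pays off: the expensive sweep over ~99 percent of the particles can be shipped to the accelerator wholesale.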
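The Gordon Bell entry contrasts O(N^2) direct summation with an O(N log N) treecode and an O(N) FMM. A Barnes-Hut-style treecode groups distant particles into cells and replaces each accepted cell by its monopole (total mass at the centre of mass), opening a cell only when it subtends too large an angle. The following minimal sketch is an assumption-laden illustration, not the code used in the paper; the names `Cell`, `insert` and `accel` and the opening-angle convention `2 * half / r < theta` are all hypothetical choices:

```python
import math

class Cell:
    """Cubic octree cell: keeps total mass and centre of mass (the monopole)."""
    def __init__(self, center, half):
        self.center, self.half = center, half   # cube centre and half-width
        self.mass = 0.0
        self.com = (0.0, 0.0, 0.0)
        self.children = None                    # None => leaf
        self.body = None                        # (pos, m) held by a leaf

def insert(cell, pos, m):
    if cell.mass > 0.0 and cell.children is None:
        # occupied leaf: subdivide and push the resident body down one level
        cell.children = {}
        _to_child(cell, *cell.body)
        cell.body = None
    # update the running monopole of this cell
    tm = cell.mass + m
    cell.com = tuple((cell.com[k] * cell.mass + pos[k] * m) / tm for k in range(3))
    cell.mass = tm
    if cell.children is None:
        cell.body = (pos, m)
    else:
        _to_child(cell, pos, m)

def _to_child(cell, pos, m):
    octant = tuple(pos[k] > cell.center[k] for k in range(3))
    if octant not in cell.children:
        center = tuple(cell.center[k] + (0.5 if octant[k] else -0.5) * cell.half
                       for k in range(3))
        cell.children[octant] = Cell(center, cell.half / 2)
    insert(cell.children[octant], pos, m)

def accel(cell, pos, theta=0.5, eps2=1e-12):
    """Barnes-Hut sum: open a cell only when it looks 'big' from pos."""
    if cell.mass == 0.0:
        return (0.0, 0.0, 0.0)
    dr = tuple(cell.com[k] - pos[k] for k in range(3))
    r = math.sqrt(dr[0]**2 + dr[1]**2 + dr[2]**2 + eps2)
    if cell.children is None or 2.0 * cell.half / r < theta:
        if cell.body is not None and cell.body[0] == pos:
            return (0.0, 0.0, 0.0)              # the leaf holding pos itself
        f = cell.mass / r**3                    # monopole approximation
        return (f * dr[0], f * dr[1], f * dr[2])
    ax = ay = az = 0.0
    for child in cell.children.values():
        cx, cy, cz = accel(child, pos, theta, eps2)
        ax += cx; ay += cy; az += cz
    return (ax, ay, az)
```

With theta = 0 no cell is ever accepted and the traversal degenerates to the direct O(N^2) sum; larger theta trades accuracy for speed. The FMM mentioned in the abstract goes further by also expanding the field on the receiving side, removing the remaining log N factor. (The sketch identifies the querying body by exact position equality, a simplification that real codes replace with particle indices.)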