Hitoshi Sakagami
University of Hyogo
Publication
Featured research published by Hitoshi Sakagami.
Journal of Physics: Conference Series | 2010
Tomoyuki Johzaki; Hideo Nagatomo; Atsushi Sunahara; H-B Cai; Hitoshi Sakagami; Kunioki Mima
The core heating properties of an Au cone-attached CD shell target in FIREX-I were investigated by integrated simulations with the FI3 code system. The importance of fast electron transport in the deformed cone tip was shown: in addition to collisional scattering and drag in the Au cone, a strong resistive magnetic field is generated at the top of the cone tip when the tip shape is deformed. Due to this field, the fast electrons are strongly scattered, and the core heating efficiency becomes smaller than in the case neglecting the magnetic field.
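For orientation, the resistive field generation mechanism invoked above is commonly written in the following textbook form (an illustrative expression, not an equation quoted from the paper): the cold return current $\mathbf{j}_r \approx -\mathbf{j}_f$ drawn through the resistive Au tip sets up an electric field whose curl grows the magnetic field,

$$
\mathbf{E} \simeq \eta\,\mathbf{j}_r \approx -\eta\,\mathbf{j}_f, \qquad
\frac{\partial \mathbf{B}}{\partial t} = -\nabla \times \mathbf{E} \approx \nabla \times \bigl(\eta\,\mathbf{j}_f\bigr),
$$

where $\eta$ is the local resistivity and $\mathbf{j}_f$ the fast electron current density; a deformed tip shape makes this curl large near the tip surface.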
Society of Instrument and Control Engineers of Japan | 2002
Manabu Nii; Kousuke Ogino; Tomokazu Sakabe; Hitoshi Sakagami; Yutaka Takahashi
In this paper, we propose a parallelized method for extracting rules from trained neural networks for high-dimensional pattern classification problems. The rule extraction method has to examine all combinations of antecedent fuzzy sets to extract fuzzy rules, and for high-dimensional problems the number of possible combinations grows exponentially. To address this difficulty, we parallelize the rule extraction method.
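As a rough illustration of the combinatorial workload described above (the fuzzy-set names, chunking strategy, and `score_rule` placeholder below are invented for illustration, not taken from the paper), distributing antecedent combinations over worker processes could be sketched as follows:

```python
# Minimal sketch: enumerate antecedent fuzzy-set combinations in parallel.
# The rule-scoring step (score_rule) and the fuzzy partitions are placeholders.
from itertools import product
from multiprocessing import Pool

FUZZY_SETS = ["small", "medium", "large", "dont_care"]  # per-input linguistic terms
N_INPUTS = 8                                            # dimensionality of the problem

def score_rule(antecedent):
    """Placeholder: evaluate one candidate rule against the trained network."""
    return sum(hash((i, a)) % 100 for i, a in enumerate(antecedent)) / 100.0

def best_rule_in_chunk(first_term):
    """Fix the first antecedent term, enumerate the rest, return the best rule."""
    best = None
    for rest in product(FUZZY_SETS, repeat=N_INPUTS - 1):
        antecedent = (first_term,) + rest
        s = score_rule(antecedent)
        if best is None or s > best[0]:
            best = (s, antecedent)
    return best

if __name__ == "__main__":
    # One chunk per value of the first antecedent term; chunks run in parallel.
    with Pool() as pool:
        results = pool.map(best_rule_in_chunk, FUZZY_SETS)
    print(max(results))  # overall best (score, antecedent) found
```

Each worker enumerates an independent slice of the antecedent space, which is the essence of parallelizing the exponential search.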
Parallel and Distributed Computing: Applications and Technologies | 2004
Tomoya Sakaguchi; Hitoshi Sakagami; Manabu Nii; Yutaka Takahashi
When one application needs results calculated by another application that simulates a different phenomenon, users must write communication code to exchange data between the two applications. Because users generally want their programs to be as simple as possible, writing this communication code should be made easier; with existing methods, however, it is complicated for users to implement. In this paper, we propose the Distributed Computing Collaboration Protocol as a simple user interface for communication between application programs.
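The abstract does not spell out the protocol's interface, so the snippet below is only a hypothetical sketch of the kind of simple exchange a coupling layer could expose to two simulation codes (the names `put_field`/`get_field` and the rendezvous endpoint are invented for illustration, not the actual DCCP API):

```python
# Hypothetical sketch of a coupling interface: one side publishes a named array,
# the other side blocks until it can fetch it.  Not the actual DCCP API.
import pickle
import socket

HOST, PORT = "localhost", 45000  # assumed rendezvous endpoint

def put_field(name, data):
    """Serve one (name, data) pair to the first peer that connects."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.sendall(pickle.dumps((name, data)))

def get_field():
    """Connect to the peer and receive one (name, data) pair."""
    with socket.create_connection((HOST, PORT)) as conn:
        chunks = []
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            chunks.append(chunk)
    return pickle.loads(b"".join(chunks))

# Usage: the producing simulation calls put_field("pressure", field_array),
# while the consuming simulation calls name, field = get_field().
```

The point of such an interface is that the user sees two calls rather than hand-written socket or MPI plumbing.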
Parallel Computational Fluid Dynamics 2002: New Frontiers and Multi-disciplinary Applications | 2003
Hitoshi Sakagami; Takao Mizuno; Shingo Furubayashi
Abstract. We have compared the HPF capabilities of Japanese HPF compilers implemented on the NEC SX-4 and SX-5, Fujitsu VPP800 and Hitachi SR8000 using a three-dimensional fluid code originally written in Fortran77, and have investigated compatibility across these machines with the same source code. We found that HPF could achieve good performance with almost the same source code, but some improvements are still needed to preserve complete compatibility of HPF sources. We also manually tuned communications with the HPF/JA extensions, which give users more control over sophisticated parallelization and communication optimizations, to improve sustained performance on the Fujitsu VPP800. Unfortunately we could not achieve a significant improvement on this hardware, because the computation grain was not large enough to exhibit the effects of HPF/JA directive optimizations. Manual tuning of communications by users, however, is essential, especially for large numbers of processors, due to Amdahl's law.
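As a reminder of why the serial and communication fraction dominates at large processor counts, the standard textbook form of Amdahl's law (not a result from the paper) is

$$
S(P) = \frac{1}{(1 - f) + f/P},
$$

where $f$ is the parallelizable fraction of the runtime and $P$ the number of processors. With $f = 0.95$, for example, $S(16) \approx 9.1$ and $S(128) \approx 17.4$, saturating at $S(\infty) = 20$; untuned communication effectively lowers $f$ and erodes scalability accordingly.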
Journal of Physics: Conference Series | 2010
Hideo Nagatomo; Tomoyuki Johzaki; Atsushi Sunahara; H. Shiraga; Hitoshi Sakagami; Hong-bo Cai; Kunioki Mima
In fast ignition, formation of a highly compressed core plasma is one of the critical issues. In this work, the effect of hydrodynamic instability in cone-guided shell implosion is studied. Two-dimensional radiation hydrodynamic simulations are carried out in which realistic seeds of the Rayleigh-Taylor instability are imposed. Preliminary results suggest that the instability reduces implosion performance, such as implosion velocity, areal density, and maximum density. In the perturbed target implosion, the cone tip breaks up earlier than in the ideal unperturbed case. This is a crucial matter for fast ignition, because the path for the heating laser is then filled with plasma before the heating laser is fired. A sophisticated implosion design that is stable and has a low in-flight aspect ratio is necessary for cone-guided shell implosion.
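For context, the linear growth rate usually quoted for the classical Rayleigh-Taylor instability (a standard textbook expression, not one derived in this paper) is

$$
\gamma = \sqrt{A\,k\,g}, \qquad A = \frac{\rho_h - \rho_l}{\rho_h + \rho_l},
$$

where $k$ is the perturbation wavenumber, $g$ the interface acceleration, and $A$ the Atwood number. Ablative stabilization and finite density gradients reduce this rate in real implosions, and a lower in-flight aspect ratio leaves a thicker shell for the perturbation to eat through, which is why the design recommendation above targets stability and low in-flight aspect ratio.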
Journal of Physics: Conference Series | 2010
M. Hata; Hitoshi Sakagami; Atsushi Sunahara; Tomoyuki Johzaki; Hideo Nagatomo
In fast ignition, the intensity of the unavoidable pre-pulse of the heating laser is still high enough to generate plasma even if the contrast ratio is high, so a preformed plasma (pre-plasma) is created before the main pulse of the heating laser irradiates the target. Recent research suggests that coating the inner surface of the cone of a cone-guided target with low-density foam enhances fast electron generation, but that research ignores the pre-plasma and the foam structure, which consists of dense CH plasmas and voids. In this paper, the effects of the pre-plasma and the foam structure on fast electron generation are investigated with a one-dimensional Particle-In-Cell code. It is found that the effects of the pre-plasma on fast electron generation are small, and that the foam structure weakens the generation of fast electrons with moderate energies.
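To give a feel for what one cycle of a one-dimensional PIC calculation involves (the skeleton below is a generic electrostatic toy in normalized units, not the electromagnetic code used in the paper), each step deposits charge to the grid, solves for the field, and pushes the particles:

```python
# Generic 1D electrostatic PIC skeleton (toy model, not the paper's EM code).
import numpy as np

NG, NP = 64, 10000          # grid cells, macro-particles
L, DT, STEPS = 2 * np.pi, 0.1, 200
dx = L / NG
QM = -1.0                   # charge/mass of electrons (normalized units)
Q = L / NP * QM             # macro-particle charge so that mean electron density = 1

rng = np.random.default_rng(0)
x = rng.uniform(0, L, NP)              # positions
v = 0.1 * rng.standard_normal(NP)      # velocities

for _ in range(STEPS):
    # 1) charge deposition (nearest-grid-point for brevity)
    idx = (x / dx).astype(int) % NG
    rho = np.bincount(idx, minlength=NG) * Q / dx + 1.0   # +1.0: neutralizing ions

    # 2) field solve: Poisson equation in k-space, then E = -dphi/dx
    rho_k = np.fft.rfft(rho)
    k = 2 * np.pi * np.fft.rfftfreq(NG, d=dx)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = -np.gradient(np.fft.irfft(phi_k, n=NG), dx)

    # 3) gather field at particle positions and push
    v += QM * E[idx] * DT
    x = (x + v * DT) % L
```

A study such as the one above then varies the initial density profile (pre-plasma scale length, foam layers and voids) and diagnoses the resulting fast electron spectrum.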
IEEE International Conference on High Performance Computing Data and Analytics | 2002
Hitoshi Sakagami; Shingo Furubayashi
The lack of performance portability has discouraged scientific application users from developing portable programs written in HPF. Because users would like to run the same source code on different parallel machines as fast as possible, we investigated the performance portability of Japanese HPF compilers (NEC and Fujitsu) with a special benchmark suite. We obtained good performance in most cases with DISTRIBUTE and INDEPENDENT directives on the NEC SX-5, but the Fujitsu VPP800 required additional LOCAL directives to explicitly force no communication inside parallel loops. It was also found that manual communication optimizations with the HPF/JA extensions were very useful for tuning parallel performance.
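Conceptually, a BLOCK distribution plus an independent, communication-free loop just means each processor owns a contiguous slice of the array and updates only that slice; a rough sketch of the idea in Python terms (not HPF itself, and not the benchmark suite used in the paper) is:

```python
# Conceptual sketch of BLOCK distribution of a 1D array over P processors
# and a communication-free ("local") parallel loop; not actual HPF code.
def block_range(n, p, rank):
    """Return the half-open index range owned by processor `rank` (0-based)."""
    base, rem = divmod(n, p)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi

N, P = 1000, 8
for rank in range(P):                  # in HPF this mapping is implicit, one rank per CPU
    lo, hi = block_range(N, P, rank)
    # INDEPENDENT-style loop body: each rank touches only its own a[lo:hi],
    # so no inter-processor communication is needed inside the loop.
    # a[lo:hi] = f(b[lo:hi])           # element-wise work on the owned block

print(block_range(1000, 8, 0), block_range(1000, 8, 7))   # -> (0, 125) (875, 1000)
```

When a compiler cannot prove that a loop stays within the owned block, it conservatively inserts communication, which is what the extra LOCAL directives suppress on the VPP800.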
Journal of the Flow Visualization Society of Japan | 2001
Tomokazu Sakabe; Hitoshi Sakagami; Manabu Nii; Yutaka Takahashi
Although we can run large-scale 3D simulations on high-performance parallel computers, it can be difficult to save the enormous simulation results as raw data for post-hoc visualization. It is therefore desirable to visualize results directly during the simulation and store them as small images. We have developed an Integrated Volume Rendering System consisting of two subsystems: an interactive parameter setting system, which makes it easy for users to determine complex rendering parameters, and a batch rendering system, which is called directly from FORTRAN code to perform volume rendering with the parameters set by the first subsystem.
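As a rough illustration of what a batch volume renderer has to do per output pixel (a generic front-to-back ray-marching compositing loop, not the system described in the paper; the transfer function is an invented placeholder):

```python
# Generic front-to-back volume-rendering sketch over a 3D scalar field.
# Rays are cast along the z axis for simplicity; the transfer function is made up.
import numpy as np

def transfer(value):
    """Placeholder transfer function: map a normalized scalar to (r, g, b) and alpha."""
    rgb = np.stack([value, value ** 2, 1.0 - value], axis=-1)
    alpha = 0.05 * value                      # low opacity per sample
    return rgb, alpha

def render_along_z(field):
    """Composite samples front to back along axis 0 of `field` (values in [0, 1])."""
    h, w = field.shape[1:]
    color = np.zeros((h, w, 3))
    transmittance = np.ones((h, w, 1))
    for z in range(field.shape[0]):           # march through the volume slice by slice
        rgb, alpha = transfer(field[z])
        alpha = alpha[..., None]
        color += transmittance * alpha * rgb  # accumulate emitted light
        transmittance *= (1.0 - alpha)        # attenuate what lies behind
    return np.clip(color, 0.0, 1.0)

# Usage, e.g. once per simulation time step:
field = np.random.default_rng(0).random((64, 128, 128))
image = render_along_z(field)                 # (128, 128, 3) array ready to save
```

Storing only such images during the run is what keeps the output small compared with dumping raw 3D fields.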
IEEE International Conference on High Performance Computing Data and Analytics | 2000
Katsunobu Nishihara; Hirokazu Amitani; Yuko Fukuda; Tetsuya Honda; Y. Kawata; Yuko Ohashi; Hitoshi Sakagami; Yoshitaka Sizuki
The use of three-dimensional PIC (Particle-in-Cell) simulation is indispensable in studies of nonlinear plasma physics, such as ultra-intense laser interactions with plasmas. A three-dimensional simulation requires a large number of particles, more than 10^7. It is therefore very important to develop a parallelization and vectorization scheme for the PIC code and a visualization method for the huge amount of simulation data. In this paper we present a new parallelization scheme suitable for present-day supercomputers and a construction method for scientific color animations to analyze simulation data. We also discuss the advantage of the Abe-Nishihara vectorization method for large-scale PIC simulations.

Most present-day supercomputers consist of multiple nodes, and each node has multiple processors with a shared memory. We have developed a new parallelization scheme in which domain decomposition is applied among nodes and particle decomposition is used for the processors within a node. The domain decomposition in PIC requires the exchange of two kinds of data between neighboring domains. One is particle data, such as particle positions and velocities, when a particle crosses the boundary between neighboring domains. The other is field data, such as electric field intensity and current density, in the boundary region. In the three-dimensional electromagnetic PIC code, forty two-dimensional variables are transferred to the neighboring domain for each boundary surface. MPI (Message Passing Interface) has been used for the transmission of these data between the nodes. The particle and field data are each packed into one-dimensional buffers and then sent to the other node, which reduces the number of communications. The particle decomposition is performed by using auto-parallelization of do-loops. We measured the scalability of the layered parallelization scheme for 25,600,000 particles and a 128×128×128 mesh using sixteen processors of an NEC SX-5. The layered parallelization is shown to provide scalable acceleration of computation for large PIC simulations.
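A minimal sketch of the packing-and-exchange idea between neighboring domains (using mpi4py as a stand-in for the Fortran/MPI original; the particle layout and the 1D slab decomposition along x are illustrative assumptions, and periodic wrap-around of positions is omitted):

```python
# Sketch of the boundary exchange in a domain-decomposed PIC step.
# Not the actual code: layout (x, y, z, vx, vy, vz) and 1D slabs are assumed.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

x_lo, x_hi = rank / size, (rank + 1) / size     # this node's slab of the unit box
right = (rank + 1) % size                       # ring of neighboring domains
left = (rank - 1) % size

# Each macro-particle is one row: x, y, z, vx, vy, vz.
particles = np.random.default_rng(rank).random((1000, 6))
particles[:, 0] = x_lo + (x_hi - x_lo) * particles[:, 0]
particles[:, 3:] -= 0.5                         # velocities in [-0.5, 0.5)

# Drift, then find particles that left the slab.
particles[:, 0] += 0.01 * particles[:, 3]
gone_right = particles[:, 0] >= x_hi
gone_left = particles[:, 0] < x_lo
stay = ~(gone_right | gone_left)

# Pack the leavers into contiguous buffers and exchange with both neighbors;
# the pickle-based sendrecv handles the varying particle counts per step.
from_left = comm.sendrecv(particles[gone_right], dest=right, source=left)
from_right = comm.sendrecv(particles[gone_left], dest=left, source=right)
particles = np.concatenate([particles[stay], from_left, from_right])
print(f"rank {rank}: now holds {len(particles)} particles")
```

Packing into one buffer per neighbor, as above, is what keeps the number of messages per step small; field boundary data would be exchanged the same way with fixed-size buffers.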
IEICE Transactions on Communications | 1995
Yutaka Takahashi; Hitoshi Sakagami