Publication


Featured research papers published by Yuichi Inadomi.


Chemical Physics Letters | 2003

Fragment molecular orbital method: application to molecular dynamics simulation, 'ab initio FMO-MD'

Yuto Komeiji; Tatsuya Nakano; Kaori Fukuzawa; Yutaka Ueno; Yuichi Inadomi; Tadashi Nemoto; Masami Uebayasi; Dmitri G. Fedorov; Kazuo Kitaura

A quantum molecular simulation method applicable to biological molecules is proposed. Ab initio fragment molecular orbital method-based molecular dynamics (FMO-MD) combines molecular dynamics simulation with the ab initio fragment molecular orbital method: FMO computes the forces acting on the atomic nuclei, while MD computes the time evolution of the nuclei. FMO-MD successfully simulated a small polypeptide, demonstrating the method's applicability to biological molecules.
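The division of labor described above, a quantum-chemical force evaluation inside a classical MD integrator, can be sketched as a velocity-Verlet loop. The harmonic `fmo_forces` stand-in and all parameters are illustrative assumptions; a real FMO-MD step would run a full fragment molecular orbital calculation at that point.

```python
import numpy as np

def fmo_forces(positions):
    """Placeholder for the ab initio FMO force evaluation.

    In real FMO-MD, every call here is a full fragment molecular
    orbital calculation; a simple harmonic restoring force is
    substituted so the integration loop is runnable.
    """
    k = 1.0  # hypothetical force constant
    return -k * positions

def velocity_verlet(positions, velocities, masses, dt, n_steps):
    """Generic velocity-Verlet integrator: the MD half of FMO-MD."""
    forces = fmo_forces(positions)
    for _ in range(n_steps):
        velocities += 0.5 * dt * forces / masses[:, None]
        positions += dt * velocities
        new_forces = fmo_forces(positions)  # one FMO calculation per step
        velocities += 0.5 * dt * new_forces / masses[:, None]
        forces = new_forces
    return positions, velocities
```

Because the force evaluation dominates the cost, the MD driver is cheap and the expensive quantum step parallelizes over fragments.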


Chemical Physics Letters | 2002

Definition of molecular orbitals in fragment molecular orbital method

Yuichi Inadomi; Tatsuya Nakano; Kazuo Kitaura; Umpei Nagashima

We propose an explicit definition of molecular orbitals in the fragment molecular orbital (FMO) method. To evaluate its accuracy, we compared the molecular orbitals and orbital energies calculated with the FMO method against those from a conventional MO method for four polyglycine molecules. The comparisons show that the FMO orbitals and orbital energies agree with the conventional results to within about 1%. The molecular orbitals calculated with the FMO method can therefore be used for accurate calculations of the chemical properties of large molecules.


IEEE International Conference on High Performance Computing, Data and Analytics | 2008

Performance prediction of large-scale parallel system and application using macro-level simulation

Ryutaro Susukita; Hisashige Ando; Mutsumi Aoyagi; Hiroaki Honda; Yuichi Inadomi; Koji Inoue; Shigeru Ishizuki; Yasunori Kimura; Hidemi Komatsu; Motoyoshi Kurokawa; Kazuaki Murakami; Hidetomo Shibamura; Shuji Yamamura; Yunqing Yu

Predicting application performance on an HPC system is important both for designing the system and for developing applications. Accurate prediction is challenging, however, particularly for a future system with higher performance. In this paper, we present a new method for predicting application performance on HPC systems. The method combines modeling of sequential performance on a single processor with macro-level simulation of the application for parallel performance on the entire system. In the simulation, the execution flow is traced, but kernel computations are omitted to reduce the execution time. Validation on a real terascale system showed that the predicted and measured performance agreed to within 10-20%. We employed the method in designing a hypothetical petascale system of 32,768 SIMD-extended processor cores; predicting application performance on this petascale system required several hours of macro-level simulation.
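The idea of tracing the execution flow while replacing each kernel computation with a modeled time can be sketched roughly as follows. The event format, kernel cost model, and communication model are illustrative assumptions, not the paper's actual simulator.

```python
# Macro-level simulation sketch: walk an application's execution trace,
# but charge each kernel a time predicted by a single-processor model
# instead of actually executing it.

def kernel_model(kernel_name, problem_size, flops_per_sec):
    """Single-processor model: predicted kernel time in seconds.
    The per-element flop counts are made-up illustrative values."""
    flops = {"stencil": 10.0, "matmul": 2.0}[kernel_name] * problem_size
    return flops / flops_per_sec

def simulate_rank(trace, flops_per_sec, network_latency):
    """Accumulate predicted time over one rank's event trace."""
    clock = 0.0
    for event, arg in trace:
        if event == "compute":        # kernel omitted, time modeled
            name, size = arg
            clock += kernel_model(name, size, flops_per_sec)
        elif event == "comm":         # message: latency + bandwidth term
            nbytes, bandwidth = arg
            clock += network_latency + nbytes / bandwidth
    return clock

def predict_parallel_time(traces, flops_per_sec, network_latency):
    """Parallel runtime ~ slowest rank (blocking dependencies ignored)."""
    return max(simulate_rank(t, flops_per_sec, network_latency)
               for t in traces)
```

Because kernels are never executed, a whole-machine run reduces to bookkeeping over traces, which is why simulating a 32,768-core system stays within hours.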


IEEE International Conference on High Performance Computing, Data and Analytics | 2015

Analyzing and mitigating the impact of manufacturing variability in power-constrained supercomputing

Yuichi Inadomi; Tapasya Patki; Koji Inoue; Mutsumi Aoyagi; Barry Rountree; Martin Schulz; David K. Lowenthal; Yasutaka Wada; Keiichiro Fukazawa; Masatsugu Ueda; Masaaki Kondo; Ikuo Miyoshi

A key challenge in next-generation supercomputing is to effectively schedule limited power resources. Modern processors suffer from increasingly large power variations due to the chip manufacturing process. These variations lead to power inhomogeneity in current systems and manifest into performance inhomogeneity in power constrained environments, drastically limiting supercomputing performance. We present a first-of-its-kind study on manufacturing variability on four production HPC systems spanning four microarchitectures, analyze its impact on HPC applications, and propose a novel variation-aware power budgeting scheme to maximize effective application performance. Our low-cost and scalable budgeting algorithm strives to achieve performance homogeneity under a power constraint by deriving application-specific, module-level power allocations. Experimental results using a 1,920 socket system show up to 5.4X speedup, with an average speedup of 1.8X across all benchmarks when compared to a variation-unaware power allocation scheme.
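The core of a variation-aware budget can be illustrated with a deliberately simplified linear power-performance model. The model `perf_i = eff_i * power_i` is an assumption for illustration only; the paper's algorithm derives allocations from measured, application-specific power/performance profiles.

```python
# Variation-aware power budgeting sketch: under a total power cap,
# equalize per-module performance by allocating power inversely to
# each module's (hypothetical) manufacturing efficiency eff_i.

def variation_aware_budget(efficiencies, total_power):
    """Per-module power allocations that equalize modeled performance."""
    inv = [1.0 / e for e in efficiencies]
    scale = total_power / sum(inv)
    return [scale * x for x in inv]

def naive_budget(efficiencies, total_power):
    """Variation-unaware baseline: split the cap evenly."""
    n = len(efficiencies)
    return [total_power / n] * n
```

With efficiencies `[1.0, 0.8]` and a 180 W cap, the scheme assigns 80 W and 100 W, equalizing modeled performance at 80 for both modules; an even 90 W / 90 W split leaves the inefficient module as a bottleneck at 72.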


Journal of Computational Chemistry | 2009

Fragment molecular orbital study of the electronic excitations in the photosynthetic reaction center of Blastochloris viridis

Tsutomu Ikegami; Toyokazu Ishida; Dmitri G. Fedorov; Kazuo Kitaura; Yuichi Inadomi; Hiroaki Umeda; Mitsuo Yokokawa; Satoshi Sekiguchi

All electron calculations were performed on the photosynthetic reaction center of Blastochloris viridis, using the fragment molecular orbital (FMO) method. The protein complex of 20,581 atoms and 77,754 electrons was divided into 1398 fragments, and the two‐body expansion of FMO/6‐31G* was applied to calculate the ground state. The excited electronic states of the embedded electron transfer system were separately calculated by the configuration interaction singles approach with the multilayer FMO method. Despite the structural symmetry of the system, asymmetric excitation energies were observed, especially on the bacteriopheophytin molecules. The asymmetry was attributed to electrostatic interaction with the surrounding proteins, in which the cytoplasmic side plays a major role.


Journal of Computational Chemistry | 2010

Parallel Fock matrix construction with distributed shared memory model for the FMO‐MO method

Hiroaki Umeda; Yuichi Inadomi; Toshio Watanabe; Toru Yagi; Takayoshi Ishimoto; Tsutomu Ikegami; Hiroto Tadano; Tetsuya Sakurai; Umpei Nagashima

A parallel Fock matrix construction program for FMO‐MO method has been developed with the distributed shared memory model. To construct a large‐sized Fock matrix during FMO‐MO calculations, a distributed parallel algorithm was designed to make full use of local memory to reduce communication, and was implemented on the Global Array toolkit. A benchmark calculation for a small system indicates that the parallelization efficiency of the matrix construction portion is as high as 93% at 1,024 processors. A large FMO‐MO application on the epidermal growth factor receptor (EGFR) protein (17,246 atoms and 96,234 basis functions) was also carried out at the HF/6‐31G level of theory, with the frontier orbitals being extracted by a Sakurai‐Sugiura eigensolver. It takes 11.3 h for the FMO calculation, 49.1 h for the Fock matrix construction, and 10 min to extract 94 eigen‐components on a PC cluster system using 256 processors.
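The compute-into-local-memory-then-accumulate pattern described above can be sketched in miniature, with Python threads and a toy contribution generator standing in for the Global Array toolkit and real two-electron integrals. All names and the contribution formula are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def fock_contributions(batch, n):
    """Toy stand-in for a batch of two-electron integral contributions:
    a deterministic sparse list of (i, j, value) updates."""
    return [(i % n, (i * 7 + 3) % n, 0.01 * i) for i in batch]

def build_fock(n, n_batches, n_workers=4):
    """Each worker accumulates its share of batches into a local buffer;
    buffers are then summed into the shared matrix (the analogue of a
    Global Arrays 'accumulate'), keeping communication to one reduction."""
    fock = np.zeros((n, n))

    def worker(batches):
        local = np.zeros((n, n))          # accumulate locally first
        for b in batches:
            for i, j, val in fock_contributions(range(b, b + 8), n):
                local[i, j] += val
        return local

    shards = [list(range(w, n_batches, n_workers)) for w in range(n_workers)]
    with ThreadPoolExecutor(n_workers) as ex:
        for local in ex.map(worker, shards):
            fock += local                 # merge into the shared matrix
    return fock
```

The result is independent of the worker count, which is the property that lets the real implementation scale the batch distribution without changing the matrix.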


Conference on High Performance Computing (Supercomputing) | 2005

Full Electron Calculation Beyond 20,000 Atoms: Ground Electronic State of Photosynthetic Proteins

Tsutomu Ikegami; Toyokazu Ishida; Dmitri G. Fedorov; Kazuo Kitaura; Yuichi Inadomi; Hiroaki Umeda; Mitsuo Yokokawa; Satoshi Sekiguchi

A full electron calculation for the photosynthetic reaction center of Rhodopseudomonas viridis was performed by using the fragment molecular orbital (FMO) method on a massive cluster computer. The target system contains 20,581 atoms and 77,754 electrons, which was divided into 1,398 fragments. According to the FMO prescription, the calculations of the fragments and pairs of the fragments were conducted to obtain the electronic state of the system. The calculation at RHF/6-31G* level of theory took 72.5 hours with 600 CPUs. The CPUs were grouped into several workers, to which the calculations of the fragments were dispatched. An uneven CPU grouping, where two types of workers are generated, was shown to be efficient.
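The uneven-grouping idea, with large fragment-pair jobs going to large worker groups, amounts to list scheduling over heterogeneous workers. A sketch under the simplifying assumption that a job's time scales inversely with group size (the cost figures and greedy policy are illustrative, not the paper's scheduler):

```python
import heapq

def dispatch(fragment_costs, worker_sizes):
    """Greedy list scheduling: assign each fragment job (largest first)
    to the worker group that becomes free earliest; a job of the given
    cost on group i is assumed to take cost / worker_sizes[i] seconds.
    Returns the makespan."""
    # heap of (time_when_free, worker_index)
    heap = [(0.0, i) for i in range(len(worker_sizes))]
    heapq.heapify(heap)
    for cost in sorted(fragment_costs, reverse=True):
        free_at, i = heapq.heappop(heap)
        heapq.heappush(heap, (free_at + cost / worker_sizes[i], i))
    return max(t for t, _ in heap)
```

With two workers of sizes 2 and 1 and jobs of cost 4, 2, 2, the big worker takes the expensive job while the small one handles a cheap job in parallel, finishing in 3 time units rather than serializing on one group.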


Cluster Computing and the Grid | 2014

Power Consumption Evaluation of an MHD Simulation with CPU Power Capping

Keiichiro Fukazawa; Masatsugu Ueda; Mutsumi Aoyagi; Tomonori Tsuhata; Kyohei Yoshida; Aruta Uehara; Masakazu Kuze; Yuichi Inadomi; Koji Inoue

Power consumption has become a key issue in achieving next-generation exaflops computer systems, yet the power-consumption characteristics of application programs have received little attention. In this study, we examine the power characteristics of our magnetohydrodynamic (MHD) simulation code for the global magnetosphere, evaluating its power-consumption behavior under CPU power capping on a parallel computer system. The results confirm that the MHD simulation code contains distinct parts whose execution performance either decreases or remains unchanged under CPU power capping. This indicates the potential for performance optimization through power capping.


International Conference on Large Scale Scientific Computing | 2005

A hybrid parallel method for large sparse eigenvalue problems on a grid computing environment using Ninf-G/MPI

Tetsuya Sakurai; Yoshihisa Kodaki; Hiroaki Umeda; Yuichi Inadomi; Toshio Watanabe; Umpei Nagashima

In the present paper, we propose a hybrid parallel method for large sparse eigenvalue problems in a grid computing environment. A moment-based method that finds several eigenvalues and their corresponding eigenvectors in a given domain is used. This method is suitable for master-worker type parallel programming models. In order to improve the parallel efficiency of the method, we propose a hybrid implementation using a GridRPC system Ninf-G and MPI. We examined the performance of the proposed method in an environment where several PC clusters are used.
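Moment-based methods of this family (e.g. the Sakurai-Sugiura approach) recover the eigenvalues inside a contour from contour-integral moments of u^H (zI - A)^{-1} v; the linear solves at the quadrature points are mutually independent, which is what makes the master-worker decomposition natural. A small dense-matrix sketch, serial and with the grid-dispatch layer omitted (contour, quadrature count, and the use of `numpy` dense solves are illustrative assumptions):

```python
import numpy as np

def moment_eigensolver(A, center, radius, n_ev, n_quad=32, seed=0):
    """Find the n_ev eigenvalues of A inside |z - center| = radius via
    contour-integral moments. Each quadrature-point solve is an
    independent task (the natural worker job in a master-worker setup)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    u = rng.standard_normal(n)
    v = rng.standard_normal(n)
    thetas = 2 * np.pi * np.arange(n_quad) / n_quad
    f = np.empty(n_quad, dtype=complex)
    for j, th in enumerate(thetas):       # worker task: one linear solve
        z = center + radius * np.exp(1j * th)
        f[j] = u @ np.linalg.solve(z * np.eye(n) - A, v)
    # trapezoidal moments mu_k ~ (1/2*pi*i) \oint (z-center)^k f(z) dz
    mu = [(radius ** (k + 1) / n_quad)
          * np.sum(np.exp(1j * (k + 1) * thetas) * f)
          for k in range(2 * n_ev)]
    # Hankel matrix pencil built from the moments
    H = np.array([[mu[k + l] for l in range(n_ev)] for k in range(n_ev)])
    Hs = np.array([[mu[k + l + 1] for l in range(n_ev)] for k in range(n_ev)])
    return center + np.linalg.eigvals(np.linalg.solve(H, Hs))
```

A master process owns the contour and the Hankel post-processing; only the per-point solves, the expensive part for large sparse matrices, need to be farmed out to remote clusters.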


Journal of Computational Chemistry | 2009

Parallel Fock matrix construction program for molecular orbital calculation-specific computer with a hierarchical network

Hiroaki Umeda; Yuichi Inadomi; Hiroaki Honda; Umpei Nagashima

A parallel Fock matrix construction program for a hierarchical network has been developed on the molecular orbital calculation‐specific EHPC system. To obtain high parallelization efficiency on the hierarchical network system, a multilevel dynamic load‐balancing scheme was adopted, which provides equal load balance and localization of communications on a tree‐structured hierarchical network. The parallelized Fock matrix construction routine was implemented into a GAMESS program on the EHPC system, which has a tree‐structured hierarchical network. Benchmark results on a 63‐processor system showed high parallelization efficiency even on the tree‐structured hierarchical network.

Collaboration


Dive into Yuichi Inadomi's collaborations.

Top Co-Authors


Umpei Nagashima

National Institute of Advanced Industrial Science and Technology


Toshio Watanabe

National Institute of Advanced Industrial Science and Technology
