Chaofeng Hou
Chinese Academy of Sciences
Publications
Featured research published by Chaofeng Hou.
ieee international conference on high performance computing data and analytics | 2013
Chaofeng Hou; Ji Xu; Peng Wang; Wen Lai Huang; Xiaowei Wang; Wei Ge; Xianfeng He; Li Guo; Jinghai Li
An efficient and highly scalable bond-order potential code has been developed for the molecular dynamics simulation of bulk silicon, reaching 1.87 Pflops (10^15 floating-point operations per second) in single precision on 7168 graphics processing units (GPUs) of the Tianhe-1A system. Furthermore, by coupling GPUs with central processing units, we also simulated surface reconstruction of crystalline silicon at the sub-millimeter scale with more than 110 billion atoms, reaching 1.17 Pflops in single precision plus 92.1 Tflops in double precision on the entire Tianhe-1A system. Such simulations can provide unprecedented insight into a variety of microscopic behaviors and structures, such as doping, defects, grain boundaries, and surface reactions.
Computer Physics Communications | 2013
Chaofeng Hou; Ji Xu; Peng Wang; Wen Lai Huang; Xiaowei Wang
Unlike previous molecular dynamics (MD) simulations with pair potentials and simpler many-body potentials, this paper describes in detail an efficient and highly scalable algorithm for GPU-accelerated MD simulation of solid covalent crystals using sophisticated many-body potentials, such as the Tersoff potentials for silicon crystals. The algorithm takes effective advantage of the reordering and sorting of atoms and the hierarchical memory of a GPU. The results indicate that about 30.5% of the peak performance of a single GPU can be achieved, with a speedup of about 650 over a contemporary CPU core, and that more than 15 million atoms can be processed by a single GPU at a speed of around 2 ns/day. Furthermore, the proposed algorithm is scalable and transferable, and can be applied to other many-body interactions and related large-scale parallel computations.
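The "reordering and sorting of atoms" mentioned in the abstract refers to a common locality optimization in GPU MD codes: atoms are sorted by the spatial cell they occupy, so neighboring atoms in space sit in adjacent memory and coalesce well on the GPU. The paper's own implementation is not reproduced here; the following is an illustrative CPU-side sketch with hypothetical names (`sort_atoms_by_cell` is not from the paper):

```python
import numpy as np

def sort_atoms_by_cell(positions, box, cell_size):
    """Sort atoms by the linear index of the spatial cell they occupy,
    so that atoms close in space are also close in memory."""
    ncells = np.maximum((box // cell_size).astype(int), 1)
    # Cell index of each atom along x, y, z (clipped to the last cell).
    cell_idx = np.minimum((positions / box * ncells).astype(int), ncells - 1)
    linear = (cell_idx[:, 0] * ncells[1] + cell_idx[:, 1]) * ncells[2] + cell_idx[:, 2]
    order = np.argsort(linear, kind="stable")
    return positions[order], order

box = np.array([10.0, 10.0, 10.0])
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(1000, 3))
sorted_pos, order = sort_atoms_by_cell(pos, box, 2.5)
```

On a GPU the same ordering lets one thread block process one cell, reading its atoms from a contiguous memory range.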
Molecular Simulation | 2012
Chaofeng Hou; Wei Ge
Graphics processing units (GPUs) are becoming powerful computational tools in science and engineering. In this paper, in contrast to previous molecular dynamics (MD) simulations with pair potentials and simpler many-body potentials, two MD simulation algorithms implemented on a single GPU are presented for a special category of many-body potentials, the bond-order potentials used frequently for solid covalent materials, such as the Tersoff potentials for silicon crystals. The simulation results reveal that the performance of the GPU implementations is clearly superior to their CPU counterpart. Furthermore, the proposed algorithms are general, transferable and scalable, and can be extended to simulations with other many-body interactions such as the Stillinger–Weber potential.
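A defining ingredient of the Tersoff bond-order potential discussed above is its smooth cutoff function, which switches each pair interaction off over a narrow shell between R-D and R+D. As a point of reference (this is the standard textbook form, not code from the paper), it can be written as:

```python
import math

def tersoff_cutoff(r, R, D):
    """Standard Tersoff cutoff f_c(r): 1 inside R-D, 0 beyond R+D,
    and a smooth half-sine ramp in the shell between."""
    if r < R - D:
        return 1.0
    if r > R + D:
        return 0.0
    return 0.5 - 0.5 * math.sin(0.5 * math.pi * (r - R) / D)
```

For silicon, typical Tersoff parameters put R near 2.85 Å with D near 0.15 Å, so each atom interacts only with its first-neighbor shell, which is what makes the short-range, many-body algorithms in these papers feasible on a GPU.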
Archive | 2013
Wei Ge; Ji Xu; Qingang Xiong; Xiaowei Wang; Feiguo Chen; Limin Wang; Chaofeng Hou; Ming Xu; Jinghai Li
This chapter serves as an introduction to the supercomputing work carried out at CAS-IPE following the strategy of structural consistency among the physics in the simulated systems, the mathematical model, the computational software expressing the numerical methods and algorithms, and finally the architecture of the computer hardware (Li et al., From multiscale modeling to Meso-science—a chemical engineering perspective, 2012; Li et al., Meso-scale phenomena from compromise—a common challenge, not only for chemical engineering, 2009; Ge et al., Chem Eng Sci 66:4426–4458, 2011). Multi-scale simulation of gas-solid flow in continuum-discrete approaches and molecular dynamics simulation of crystalline silicon are taken as examples, both making full use of CPU-GPU hybrid supercomputers. This strategy is demonstrated to be effective and critical for achieving good scalability and efficiency in such simulations. The software and hardware systems thus designed have found wide applications in process engineering.
International Journal of Modern Physics C | 2013
Chengxiang Li; Wen Lai Huang; Chaofeng Hou; Wei Ge
The atomic structures of grain boundaries (GBs) and their effect on the performance of poly-Si thin-film solar cells are studied by multi-scale simulations. First, the atomic structures of various GBs are calculated using molecular dynamics. Subsequently, the energy band diagrams are obtained by ab initio calculations. Then, the finite difference method is applied to obtain the solar cell performance. The results show that the Σ5 (twist) GB can greatly enhance carrier recombination and results in a small short-circuit current density (JSC) and open-circuit voltage (VOC). However, the Σ17 (twist and tilt) GBs have little influence on the cell performance. The simulations also reveal that a GB near the p–n junction leads to very small JSC and VOC. When the distance between the GB and the p–n junction increases from about 1.10 μm to 3.65 μm, the conversion efficiency increases by about 29%. The effect of the thickness of a solar cell containing the Σ5 (twist) GB on the cell performance is also studied. The results show that the conversion efficiency and JSC increase rapidly as the thickness increases from about 5.2 μm to 40 μm. When the thickness ranges from about 40 μm to 70 μm, the efficiency and JSC both increase gradually and reach their peak values at about 70 μm. When the thickness exceeds 70 μm, the efficiency and JSC both decrease gradually. However, VOC keeps increasing with increasing thickness. The effects of GBs on the carrier transport and recombination processes are discussed to explain these results.
International Journal of Modern Physics C | 2012
Chaofeng Hou; Wei Ge
Graphics processing units (GPUs) are becoming powerful computational tools in scientific and engineering fields. In this paper, to make full use of the available computing capability, a novel mode for parallel molecular dynamics (MD) simulation is presented and implemented on the basis of multiple GPUs hybridized with central processing units (CPUs). Taking into account the interactions between CPUs, GPUs, and the threads on a GPU in a multi-scale and multi-level computational architecture, several cases, such as polycrystalline silicon and heat transfer on the surface of silicon crystals, are taken as model systems to verify the feasibility and validity of the mode. Furthermore, the mode can be extended to MD simulations in other areas such as biology and chemistry.
Computer Physics Communications | 2018
Chenglong Zhang; Mingcan Zhao; Chaofeng Hou; Wei Ge
Searching for interaction pairs and organizing the interaction processes are important steps in molecular dynamics (MD) algorithms and are critical to the overall efficiency of the simulation. Neighbor lists are widely used for these steps: a thicker skin reduces the frequency of list updating, but this gain is offset by additional distance checks for the extra particle pairs. In this paper, we propose a new neighbor-list-based algorithm with a precisely designed multilevel skin which reduces unnecessary computation of inter-particle distances. Its performance advantages over traditional methods are analyzed against the main simulation parameters on Intel CPUs and MICs (many integrated cores), and are clearly demonstrated. The algorithm can be generalized to various discrete simulations using neighbor lists.
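The skin trade-off the abstract describes is easiest to see in the classic single-level Verlet list, which the paper's multilevel scheme refines: pairs are collected out to cutoff + skin, and the list stays valid until some atom has moved more than half the skin. A minimal brute-force sketch (illustrative only, not the paper's algorithm; `build_neighbor_list` and `needs_rebuild` are hypothetical names):

```python
import numpy as np

def build_neighbor_list(pos, cutoff, skin):
    """Brute-force Verlet list: all pairs within cutoff + skin."""
    n = len(pos)
    rlist = cutoff + skin
    pairs = []
    for i in range(n - 1):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        for j in np.nonzero(d < rlist)[0]:
            pairs.append((i, i + 1 + j))
    return pairs

def needs_rebuild(pos, pos_at_build, skin):
    """The list is stale once any atom has moved more than skin/2,
    since two such atoms could newly come within the cutoff."""
    disp = np.linalg.norm(pos - pos_at_build, axis=1)
    return bool(disp.max() > 0.5 * skin)

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
pairs = build_neighbor_list(pos, cutoff=1.5, skin=0.5)
```

A thicker skin makes `needs_rebuild` fire less often but inflates `pairs`, and every force evaluation must distance-check each listed pair; the multilevel skin of the paper is aimed precisely at cutting that extra checking.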
Modelling and Simulation in Materials Science and Engineering | 2016
Chaofeng Hou; Ji Xu; Wei Ge; Jinghai Li
Nonequilibrium molecular dynamics simulation has been a powerful tool for studying the thermophysical properties of bulk silicon and silicon nanowires. Nevertheless, limited by the capacity and capability of computational resources, traditional longitudinal and transverse simulation sizes are restricted to a narrow range far below experimental scales, which seriously hinders the exploration of thermal properties. In this work, based on a powerful and efficient molecular dynamics (MD) simulation method, the computation of thermal conductivity beyond the known Casimir size limits is realized. The longitudinal dimensions of the simulations significantly exceed the micrometer scale. More importantly, the lateral characteristic sizes are much larger than 10 nanometers, directly comparable with the silicon nanowires fabricated and measured experimentally, whereas traditional simulation sizes are several nanometers. The virtual experimental measurement provided by our simulations achieves direct prediction of the thermal conductivity of bulk silicon and real-scale silicon nanowires, and delineates the complete longitudinal size dependence of their thermal conductivities, especially at the elusive mesoscopic scale. Furthermore, the presented measurement paves a promising way to explore in depth the thermophysical properties of other bulk covalent solids and their low-dimensional structures, such as nanowires and nanosheets.
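In nonequilibrium MD studies of this kind, the thermal conductivity is typically extracted from Fourier's law, k = -J / (dT/dx): a known heat flux J is imposed and the steady-state temperature gradient is fitted from the profile. The paper's actual post-processing is not shown here; this is a generic sketch with synthetic data and arbitrary units:

```python
import numpy as np

def thermal_conductivity(x, T, heat_flux):
    """Estimate k from Fourier's law k = -J / (dT/dx), with the gradient
    taken from a linear least-squares fit to the temperature profile."""
    slope, _intercept = np.polyfit(x, T, 1)
    return -heat_flux / slope

# Synthetic steady-state profile: T falls linearly along the transport axis.
x = np.linspace(0.0, 100.0, 11)   # position (arbitrary length units)
T = 320.0 - 0.4 * x               # temperature (K), gradient -0.4
J = 0.06                          # imposed heat flux (arbitrary units)
k = thermal_conductivity(x, T, J)  # -> 0.06 / 0.4 = 0.15
```

In a real NEMD run one would average the profile over time and exclude the thermostatted regions from the fit, since the profile is nonlinear near the heat source and sink.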
Chemical Engineering Science | 2011
Wei Ge; Wei Wang; Ning Yang; Jinghai Li; Mooson Kwauk; Feiguo Chen; Jianhua Chen; Xiaojian Fang; Li Guo; Xianfeng He; Xinhua Liu; Yaning Liu; Bona Lu; Jian Wang; Junwu Wang; Limin Wang; Xiaowei Wang; Qingang Xiong; Ming Xu; Lijuan Deng; Yongsheng Han; Chaofeng Hou; Leina Hua; Wen Lai Huang; Bo Li; Chengxiang Li; Fei Li; Ying Ren; Ji Xu; Nan Zhang
Computational Materials Science | 2012
Wen Lai Huang; Wei Ge; Chengxiang Li; Chaofeng Hou; Xiaowei Wang; Xianfeng He