Publications
Featured research published by M. Kojima.
Nuclear Fusion | 2011
H. Nakanishi; M. Ohsuna; M. Kojima; S. Imazu; M. Nonomura; T. Yamamoto; M. Emoto; Masafumi Yoshida; C. Iwata; M. Shoji; Y. Nagayama; K. Kawahata; M. Hasegawa; A. Higashijima; K. Nakamura; Yasushi Ono; M. Yoshikawa; S. Urushidani
A high-performance data acquisition (DAQ) system has been developed for steady-state fusion experiments at the Large Helical Device (LHD). Its significant characteristics are a 110 MB s−1 continuous DAQ capability and performance scalability through an unlimited number of DAQ units. Incoming data streams are first transferred temporarily onto shared random access memory, and then cut into definite time chunks to be stored. They are also thinned out to 1/N to be served to real-time monitoring clients. In the LHD steady-state experiment, the DAQ cluster established the world record by acquiring 90 GB/shot. The established technology for steady-state acquisition and storage can contribute to the ITER experiments, whose data amount is estimated to be in the range of 100–1000 GB/shot. This system also acquires experimental data from multiple remote sites through the fusion-dedicated virtual private network in Japan. The throughput degradation problem in long-distance TCP/IP data transfer has been mitigated by packet pacing optimization. The demonstrated collaboration scheme will be analogous to that of ITER and its supporting machines.
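The chunk-and-thin pipeline the abstract describes can be illustrated with a minimal Python sketch (the function and variable names are hypothetical, not the LABCOM implementation): incoming samples are cut into fixed-length time chunks for storage, and every Nth chunk is forwarded to monitoring clients.

```python
def chunk_stream(samples, chunk_len):
    """Cut a flat sample stream into fixed-length time chunks for storage."""
    return [samples[i:i + chunk_len] for i in range(0, len(samples), chunk_len)]

def thin(chunks, n):
    """Keep 1 out of every N chunks for real-time monitoring clients."""
    return chunks[::n]

# Toy stand-in for a continuous data stream.
stream = list(range(100))
chunks = chunk_stream(stream, 10)   # 10 chunks of 10 samples each
monitor = thin(chunks, 5)           # every 5th chunk goes to monitoring
```

In the real system the storage path and the thinned monitoring path run concurrently; this sketch only shows the data-shaping step.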
Fusion Science and Technology | 2010
H. Nakanishi; M. Ohsuna; M. Kojima; S. Imazu; M. Nonomura; M. Hasegawa; K. Nakamura; A. Higashijima; M. Yoshikawa; M. Emoto; T. Yamamoto; Y. Nagayama; K. Kawahata
Abstract The data acquisition (DAQ) and management system of the Large Helical Device (LHD), named the LABCOM system, has been in development since 1995. The recently acquired data have grown to 7 gigabytes per shot, 10 times larger than estimated before the experiment. In 2006, during 1-h pulse experiments, 90 gigabytes of data were acquired, a new world record. This data explosion has been enabled by the massively distributed processing architecture and the newly developed capability of real-time streaming acquisition. The former provides linear expandability, since increasing the number of parallel DAQs avoids I/O bottlenecks. The latter improves the unit performance from 0.7 megabytes/s in conventional CAMAC digitizers to a nonstop 110 megabytes/s in CompactPCI. The technical goal of this system is to be able to handle one hundred concurrent 100 megabytes/s DAQs, even for steady-state plasma diagnostics. This is similar to the data production rate of next-generation experiments such as ITER. The LABCOM storage comprises several hundred terabytes in a double-tier structure: the first tier consists of tens of hard drive arrays, and the second of several Blu-ray Disc libraries. Multiplexed and redundant storage servers are mandatory for higher availability and throughput. Together they serve sharable volumes on Red Hat GFS2 cluster file systems. The LABCOM system is used not only for LHD but also for the QUEST and GAMMA10 experiments, creating a new Fusion Virtual Laboratory remote participation environment that others can access regardless of their location.
Fusion Engineering and Design | 2000
H. Nakanishi; M. Emoto; M. Kojima; M. Ohsuna; S. Komada
Abstract The new data acquisition system of the Large Helical Device (LHD) diagnostics, i.e. the LABCOM system, successfully started operation in March 1998. It has a simple but massively parallel-processing (MPP) structure built on multiple PC/Windows NT environments, and the most significant methodology adopted for it is object-oriented (OO) data handling throughout the whole system. The functions and data substances of the acquisition system are described as autonomous objects with corresponding C++ class definitions. An object-oriented database management system (ODBMS) will be the only solution able to provide a vast, virtual storage space for an enormous number of archived data objects. The commercial ODBMS product ‘O2’ is installed on each diagnostic acquisition computer. Practical O2 investigations showed a data storing rate of 300–400 kB/s, whereas the data transfer rate from CAMAC digitizers to the computer is up to 700 kB/s in this system. Applying the GNU project’s ‘zlib’ compression library for data size reduction compensates for this rate gap. Through the first and second (∼#7132) LHD experimental campaigns, the LABCOM system acquired about 400 GB of raw data, with a maximum of 120 MB per shot. These experiences proved that OO technology holds great promise for the next generation of data acquisition and storage systems in fusion research experiments.
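The rate-gap compensation mentioned above is simple to demonstrate: if the digitizer delivers up to 700 kB/s but the ODBMS stores only 300–400 kB/s, compressing each shot before storage reduces the bytes the store must absorb. A minimal sketch using Python's standard `zlib` module (the data here is a synthetic placeholder, not LHD diagnostic data):

```python
import zlib

def compress_shot(raw: bytes, level: int = 6) -> bytes:
    """Compress one shot's raw data before handing it to the store."""
    return zlib.compress(raw, level)

# Synthetic, highly compressible stand-in for digitizer output.
raw = bytes(1000) * 100
packed = compress_shot(raw)
ratio = len(packed) / len(raw)  # effective store rate scales by ~1/ratio
```

Whether compression closes the gap in practice depends on how compressible the diagnostic waveforms are; the abstract reports that it did for this system.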
international conference on advanced applied informatics | 2013
Kenta Funaki; Teruhisa Hochin; Hiroki Nomiya; H. Nakanishi; M. Kojima
This paper proposes two methods of indexing a large amount of data in order to resolve issues of time and size limitations. One is a method in which indexes are created one by one: when the size of an index reaches the limit, a new index is created. The other is a method in which several indexes are created in advance, and data are inserted into them according to a round-robin scheme. Performance evaluation experiments show that the proposed one-by-one method provides the best insertion performance, and that both proposed methods provide better retrieval performance than the conventional method. In addition, parallel processing over the indexes divided by the proposed methods could accelerate retrieval.
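The two insertion strategies can be sketched in a few lines of Python (hypothetical names; the paper's indexes are real index structures, modeled here as plain lists): "one-by-one" fills an index until a size limit and then opens a new one, while "round-robin" pre-creates K indexes and cycles inserts across them.

```python
import itertools

def insert_one_by_one(items, limit):
    """Fill one index at a time; open a new index when the limit is hit."""
    indexes = [[]]
    for item in items:
        if len(indexes[-1]) >= limit:
            indexes.append([])
        indexes[-1].append(item)
    return indexes

def insert_round_robin(items, k):
    """Pre-create k indexes and distribute inserts cyclically."""
    indexes = [[] for _ in range(k)]
    for idx, item in zip(itertools.cycle(range(k)), items):
        indexes[idx].append(item)
    return indexes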
international conference on advanced applied informatics | 2014
Kenta Funaki; Teruhisa Hochin; Hiroki Nomiya; H. Nakanishi; M. Kojima
This paper proposes a parallel indexing scheme for a large amount of data in order to resolve issues of time limitation. Three kinds of computing nodes are introduced: reception-nodes, representative-nodes, and normal-nodes. A reception-node receives data for insertion, a representative-node receives queries, and normal-nodes retrieve data from indexes. Three kinds of indexes are also introduced: a whole-index, partial-indexes, and a reception-index. Data are stored in partial-indexes; the whole-index stores partial-indexes as its data; and additional data are stored in a reception-index, which is then moved to a normal-node and becomes a partial-index. The proposed scheme is also a data distribution scheme that shortens insertion time: a reception-node accepts additional data even if the index is already built.
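The division of roles among the node and index kinds can be sketched as follows. This is a simplified, single-process model with assumed class names, not the paper's distributed implementation: a reception-node buffers new data in a reception-index and later hands it off as a partial-index, while a representative-node keeps a whole-index of partial-indexes and fans queries out to them.

```python
class PartialIndex:
    """A built index over one portion of the data (modeled as a list)."""
    def __init__(self, data):
        self.data = list(data)
    def search(self, pred):
        return [d for d in self.data if pred(d)]

class ReceptionNode:
    """Accepts inserts into a reception-index, even after other
    indexes are already built."""
    def __init__(self):
        self.buffer = []
    def insert(self, item):
        self.buffer.append(item)
    def hand_off(self):
        """Turn the reception-index into a partial-index for a normal-node."""
        idx, self.buffer = PartialIndex(self.buffer), []
        return idx

class RepresentativeNode:
    """Holds the whole-index (a collection of partial-indexes) and fans
    queries out to them; the paper does this across normal-nodes in parallel."""
    def __init__(self):
        self.whole_index = []
    def register(self, partial):
        self.whole_index.append(partial)
    def query(self, pred):
        hits = []
        for partial in self.whole_index:
            hits.extend(partial.search(pred))
        return hits
```

The key property the abstract emphasizes survives even in this toy model: insertion (at the reception-node) never blocks on the already-built indexes.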
software engineering, artificial intelligence, networking and parallel/distributed computing | 2012
Ryohei Azuma; Teruhisa Hochin; Hiroki Nomiya; H. Nakanishi; M. Kojima
This paper uses PostgreSQL, a database management system, for the data management of similarity retrieval over subsequences of waveforms. By using PostgreSQL, data management becomes easy, and a multi-dimensional index structure, the R-tree, can be used. The paper examines parallel processing of similarity retrieval by dividing a table into several tables. It is shown that high retrieval performance can easily be obtained by using a high-performance computer, and that PostgreSQL makes the division of a table easy because R-tree indexes are constructed automatically.
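The divide-and-search pattern the abstract describes can be sketched without a database. In this hypothetical Python model, the "tables" are lists, the per-table R-tree probe is replaced by an exhaustive nearest-neighbor scan, and the per-table searches run on a thread pool before a final merge picks the global best match:

```python
from concurrent.futures import ThreadPoolExecutor

def divide(rows, n):
    """Split one table's rows into n sub-tables (round-robin)."""
    return [rows[i::n] for i in range(n)]

def nearest(sub, q):
    """Best match within one sub-table (stands in for an R-tree probe)."""
    return min(sub, key=lambda r: abs(r - q))

def parallel_nn(rows, q, n=4):
    """Search all sub-tables in parallel, then merge the local winners."""
    subs = divide(rows, n)
    with ThreadPoolExecutor(max_workers=n) as ex:
        local = list(ex.map(lambda s: nearest(s, q), subs))
    return min(local, key=lambda r: abs(r - q))
```

The merge step is why dividing the table preserves correctness: the global nearest neighbor is necessarily the nearest within its own sub-table.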
Fusion Engineering and Design | 2004
H. Nakanishi; Teruhisa Hochin; M. Kojima
Fusion Engineering and Design | 2010
Teruhisa Hochin; Yoshihiro Yamauchi; H. Nakanishi; M. Kojima; Hiroki Nomiya
Fusion Engineering and Design | 2008
Teruhisa Hochin; Katsumasa Koyama; H. Nakanishi; M. Kojima
Fusion Engineering and Design | 2006
M. Ohsuna; H. Nakanishi; S. Imazu; M. Kojima; M. Nonomura; M. Emoto; Y. Nagayama; Haruhiko Okumura