Publication


Featured research published by Takaki Nakamura.


International Conference on Advanced Applied Informatics | 2016

REC2: Restoration Method Using Combination of Replication and Erasure Coding

Hitoshi Kamei; Shinya Matsumoto; Takaki Nakamura; Hiroaki Muraoka

When large-scale disasters occur, such as the Great East Japan Earthquake of March 2011, servers located in the affected area are sometimes disrupted. If those servers hold important information, such as medical records, restoring them quickly is crucial because the information is needed for medical activities. We propose a restoration method that combines replication and erasure coding, called REstoration method using a Combination of Replication and Erasure Coding (REC2). REC2 applies replication to the metadata of a file and erasure coding to its user data. This combination achieves both high restoration throughput and a reduced amount of backup data. In this paper, we describe the implementation details of REC2 and its evaluation. In the evaluation, we compare REC2 with a conventional restoration method in terms of the amount of backup data and restoration throughput. The results show that REC2 provides higher restoration throughput than the conventional method while reducing the amount of backup data.
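The hybrid scheme described in the abstract can be illustrated with a minimal sketch: small metadata is fully replicated, while the larger user data is protected by an erasure code. The single-parity XOR code and helper names below are illustrative assumptions, not the actual coding parameters used by REC2.

```python
# Sketch of the REC2 idea: replicate metadata, erasure-code user data.
# The XOR single-parity scheme (tolerates one lost block) stands in for
# whatever erasure code the real system uses.

def replicate_metadata(metadata: dict, n_sites: int) -> list[dict]:
    """Full copies of the (small) metadata go to every backup site."""
    return [dict(metadata) for _ in range(n_sites)]

def xor_encode(blocks: list[bytes]) -> bytes:
    """Single parity block computed over equal-sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def xor_recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the one missing data block from survivors plus parity."""
    missing = bytearray(parity)
    for block in surviving:
        for i, b in enumerate(block):
            missing[i] ^= b
    return bytes(missing)

# User data split into 3 blocks; blocks + parity spread over 4 sites.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_encode(data)
# Suppose the site holding data[1] is lost in a disaster:
restored = xor_recover([data[0], data[2]], parity)
assert restored == b"BBBB"
```

The point of the split is that metadata replication keeps restoration fast (no decoding needed to rebuild the namespace), while erasure coding keeps the backup volume for bulk user data low.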


International Conference on Advanced Applied Informatics | 2016

A Guideline for Data Placement in Heterogeneous Distributed Storage Systems

Shun Kaneko; Takaki Nakamura; Hitoshi Kamei; Hiroaki Muraoka

We propose a guideline for data placement in heterogeneous distributed storage systems, which can otherwise suffer degraded aggregate data throughput. The guideline is that data accessed by a client should be distributed equally across all servers. We evaluate a storage system configured according to the guideline, using sysstat to measure performance, and compare read/write throughput between a typical data placement and the proposed placement. We conclude that the proposed guideline improves aggregate data throughput as the number of access streams increases.
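The guideline amounts to spreading each client's blocks evenly over every server so that concurrent streams load all servers uniformly. A minimal sketch of that placement rule, with made-up server names and block counts:

```python
# Even placement sketch: block i of a client's data goes to server
# i mod N, so every server carries the same share of each client's load.
# Server names and block counts below are illustrative only.

from collections import Counter

def place_evenly(blocks: list[str], servers: list[str]) -> dict[str, str]:
    """Round-robin mapping from data block to server."""
    return {b: servers[i % len(servers)] for i, b in enumerate(blocks)}

servers = ["fast-ssd", "mid-hdd", "slow-hdd"]
blocks = [f"client1-blk{i}" for i in range(9)]
placement = place_evenly(blocks, servers)
load = Counter(placement.values())
assert all(n == 3 for n in load.values())  # each server holds 3 blocks
```

Note that even placement deliberately ignores server speed: the paper's observation is that skewed placement in a heterogeneous system can degrade *aggregate* throughput, whereas equal placement keeps all servers contributing as streams scale.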


IEEJ Transactions on Electrical and Electronic Engineering | 2016

Redundancy-based Iterative Method to Select Multiple Safe Replication Sites for Risk-aware Data Replication

Shinya Matsumoto; Takaki Nakamura; Hiroaki Muraoka

This paper presents a method to solve the ‘replication site decision problem’ (RSDP) in a short computation time when multiple replicas are used. RSDP is the problem of finding the safest combination of primary–replication site pairs when an assumed disaster, such as an earthquake, can affect hundreds or thousands of sites. The existing formulation of RSDP is solvable, but it often takes a long computation time to reach an optimal solution because additional replicas cause a rapid increase in the number of primary–replication site combinations. The proposed heuristic method, based on redundancy-based problem partitioning and iterative parameter updates, reduces the number of combinations at a slight cost in data availability in the disaster-affected area. A computation-time evaluation shows that the proposed method with two or three replicas takes at most twice or three times, respectively, as long as the original RSDP with one replica, independently of the number of sites. In contrast, the original RSDP with two replicas takes 5 times as long as with one replica at 10 sites and 3036 times as long at 80 sites. Moreover, the data availability cost of the proposed method is only 0.1%.
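The combinatorial blow-up the abstract describes can be seen in a toy version of the problem: exhaustively scoring every candidate replica combination against a disaster model. The zone-based risk model, site names, and helper functions below are illustrative assumptions; the paper's heuristic avoids exactly this exhaustive search.

```python
# Toy RSDP sketch: pick replica sites for a primary so that no single
# assumed disaster zone wipes out the primary and all replicas at once.
# Sites, zones, and the shared-zone risk measure are hypothetical.

from itertools import combinations

# Disaster zones each site belongs to (illustrative data).
zones = {
    "sendai":  {"quakeA"},
    "tokyo":   {"quakeA", "quakeB"},
    "osaka":   {"quakeB"},
    "sapporo": {"quakeC"},
}

def correlated_risk(sites: tuple[str, ...]) -> int:
    """Number of disaster zones that would take out *all* given sites."""
    shared = set.intersection(*(zones[s] for s in sites))
    return len(shared)

def safest_replicas(primary: str, k: int) -> tuple[str, ...]:
    """Brute force over all k-subsets; cost grows combinatorially in k."""
    candidates = [s for s in zones if s != primary]
    return min(combinations(candidates, k),
               key=lambda c: correlated_risk((primary, *c)))

best = safest_replicas("sendai", 1)
assert correlated_risk(("sendai", *best)) == 0  # no shared disaster zone
```

With hundreds of sites and multiple replicas the candidate set `combinations(candidates, k)` explodes, which is the motivation for the paper's partitioning-and-iteration heuristic.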




IEEE Transactions on Magnetics | 2013

A Method for Eliminating Metadata Cache Deallocation Latency in Enterprise File Servers

Takayuki Fukatani; Keiichi Matsuzawa; Hitoshi Kamei; Masakuni Agetsuma; Takaki Nakamura

This paper presents an analysis of a performance bottleneck in enterprise file servers running Linux and proposes a modification to the operating system to avoid it. The analysis shows that metadata cache deallocation in current Linux causes large latency in file-request processing when the operational throughput of a file server becomes high. To eliminate this latency, a new method called “split reclaim,” which separates metadata cache deallocation from conventional cache deallocation, is proposed. Experiments show that the split-reclaim method reduces the worst response time by more than 95% and achieves three times higher throughput under a metadata-intensive workload. It also reduces latency caused by cache deallocation under a general file-server workload by more than 99%. These results indicate that the split-reclaim method eliminates metadata cache deallocation latency and makes it possible to use commodity servers as enterprise file servers.
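The core idea of separating the two reclaim paths can be illustrated with a simplified user-space analogy: keep metadata and data caches in independent pools, each with its own bounded reclaim pass, so that a large metadata eviction burst never runs inside the data path. The actual mechanism lives in the Linux kernel's inode/page cache shrinkers; everything below is an illustrative assumption.

```python
# User-space analogy of "split reclaim": two separate LRU pools, each
# reclaimed on its own, so metadata eviction cost is isolated from the
# data cache. This is a teaching sketch, not the kernel mechanism.

from collections import OrderedDict

class LRUPool:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items: OrderedDict = OrderedDict()

    def put(self, key, value) -> None:
        self.items[key] = value
        self.items.move_to_end(key)  # mark as most recently used

    def reclaim(self) -> int:
        """Evict oldest entries until within capacity; return evictions."""
        evicted = 0
        while len(self.items) > self.capacity:
            self.items.popitem(last=False)  # drop least recently used
            evicted += 1
        return evicted

# Separate pools: a metadata reclaim burst is bounded by its own pool
# and never blocks (or is blocked by) data-cache eviction.
meta, data = LRUPool(2), LRUPool(4)
for i in range(10):
    meta.put(f"inode{i}", {})
    data.put(f"page{i}", b"")
assert meta.reclaim() == 8 and data.reclaim() == 6
```

In the unified scheme the analysis criticizes, one reclaim pass walks both kinds of cached objects together, so a huge metadata cache inflates the latency of every reclaim triggered on the request path.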


IEICE Transactions on Information and Systems | 2015

Discreet Method to Match Safe Site-Pairs in Short Computation Time for Risk-Aware Data Replication

Takaki Nakamura; Shinya Matsumoto; Hiroaki Muraoka


Archive | 2011

Risk-aware Data Replication to Massively Multi-sites against Widespread Disasters

Shinya Matsumoto; Takaki Nakamura; Hiroaki Muraoka


IEEJ Transactions on Electronics, Information and Systems | 2017

An Evaluation of Restoration Time and the Amount of Backup Data of Data Replication Method for Large-scale Disasters

Hitoshi Kamei; Takaki Nakamura; Hiroaki Muraoka


Electronics and Communications in Japan | 2017

A Method of Shared File Cache for File Clone Function to Improve I/O Performance for Virtual Machines

Hitoshi Kamei; Osamu Yashiro; Takaki Nakamura


Journal of Information Processing | 2016

Comparison of Distance Limiting Methods for Risk-aware Data Replication in Urban and Suburban Area

Takaki Nakamura; Shinya Matsumoto; Masaru Tezuka; Satoru Izumi; Hiroaki Muraoka
