Keiichi Matsuzawa
Hitachi
Publications
Featured research published by Keiichi Matsuzawa.
ACM International Conference on Systems and Storage | 2018
Keiichi Matsuzawa; Mitsuo Hayasaka; Takahiro Shinagawa
Upgrading file servers is indispensable for improving performance, reducing the possibility of failures, and reducing power consumption. To upgrade file servers, files must be migrated from the old server to the new one, which poses three challenges: reducing downtime during migration, reducing migration overhead, and supporting migration between heterogeneous servers. Existing technologies struggle to meet all three challenges at once. We propose a quick file migration scheme for heterogeneous servers. To reduce downtime, we exploit the post-copy approach and introduce on-demand migration, which allows file access before the migration completes. To reduce overhead, we introduce background migration, which migrates files as quickly as possible without affecting performance and incurs no overhead after the migration. To support heterogeneity, we introduce stub-based file management, which requires no internal state from the old server. We implemented our scheme for Linux with support for the NFS and SMB protocols. The experimental results show that the downtime was at most 23 s in a 4-level, 1000-file directory, and that migrating 242 GiB of data took 70 min over NFS and 204 min over SMB.
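The post-copy idea in the abstract can be illustrated with a minimal sketch: every file on the new server starts as a stub, a client read pulls the file from the old server on demand, and a background pass drains the remaining stubs. All class and method names here (`OldServer`, `MigratingServer`, `background_step`) are illustrative assumptions, not the paper's actual implementation, which works at the NFS/SMB protocol level.

```python
class OldServer:
    """Stands in for the legacy file server reached over NFS/SMB."""
    def __init__(self, files):
        self.files = dict(files)

    def read(self, path):
        return self.files[path]


class MigratingServer:
    """New server: every path begins as a stub; a read triggers
    on-demand migration, and a background pass migrates the rest."""
    def __init__(self, old):
        self.old = old
        self.local = {}                  # fully migrated files
        self.stubs = set(old.files)      # paths not yet migrated

    def read(self, path):
        if path in self.stubs:           # post-copy: fetch on first access
            self.local[path] = self.old.read(path)
            self.stubs.discard(path)     # stub resolved; no further overhead
        return self.local[path]

    def background_step(self):
        """Migrate one pending file when the server is otherwise idle."""
        if self.stubs:
            self.read(next(iter(self.stubs)))
```

Because a stub records only the file's location on the old server, no internal state of the old server is needed, which is what makes the scheme work across heterogeneous servers.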
IEEE International Conference on Cloud Computing Technology and Science | 2017
Keiichi Matsuzawa; Takahiro Shinagawa
Storage cache prefetching is an effective technique for reducing access latency in hierarchical storage systems when the access pattern is predictable from access locality. In Infrastructure-as-a-Service (IaaS) clouds, however, storage virtualization significantly rearranges data placement, thereby reducing the spatial locality observed in the host operating system (OS). Moreover, IaaS clouds consolidate applications with various workloads that may change over time. Therefore, the access pattern changes both spatially and temporally. This paper proposes an adaptive storage cache prefetching scheme that uses structural and statistical information inside virtual machines (VMs). Observing applications' file usage and the internal file-layout information in the guest OS allows the host OS to capture spatial and temporal locality during storage access. In addition, application-level performance statistics allow the host OS to tune the prefetch speed adaptively to prevent performance degradation due to excessive prefetching. We implemented a prototype cache prefetching system that cooperates with Linux and PostgreSQL in a VM. Experiments using the TPCx-V benchmark showed that VM-awareness improved performance by 17.1% compared with traditional prefetching. Our system achieved 3.15 times better performance than an existing non-prefetching caching system.
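The adaptive tuning step described above can be sketched as a simple feedback controller: while an application-level latency metric sampled from the guest stays under its target, the host raises the prefetch rate; on overshoot it backs off sharply. This AIMD-style rule, and every name and constant in it, is an assumption for illustration, not the paper's actual control algorithm.

```python
def tune_prefetch_rate(rate, latency_ms, target_ms,
                       step=8, min_rate=1, max_rate=256):
    """Return the next prefetch rate (e.g. blocks/s).

    Additively ramp up while the guest application meets its latency
    target; halve the rate when excessive prefetching starts to hurt.
    """
    if latency_ms > target_ms:
        return max(min_rate, rate // 2)   # back off: prefetching interferes
    return min(max_rate, rate + step)     # ramp up while headroom exists
```

Called once per sampling interval, the controller converges to the highest prefetch speed the consolidated workload tolerates without degrading foreground performance.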
IEEE Transactions on Magnetics | 2013
Takayuki Fukatani; Keiichi Matsuzawa; Hitoshi Kamei; Masakuni Agetsuma; Takaki Nakamura
This paper presents an analysis of a performance bottleneck in enterprise file servers running Linux and proposes a modification to the operating system that avoids it. The analysis shows that metadata cache deallocation in current Linux causes large latency in file-request processing when the operational throughput of a file server becomes large. To eliminate this latency, a new method called "split reclaim," which separates metadata cache deallocation from conventional cache deallocation, is proposed. It is experimentally shown that the split-reclaim method reduces the worst-case response time by more than 95% and achieves three times higher throughput under a metadata-intensive workload. The split-reclaim method also reduces latency caused by cache deallocation under a general file-server workload by more than 99%. These results indicate that the split-reclaim method can eliminate metadata cache deallocation latency and makes it possible to use commodity servers as enterprise file servers.
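The core idea of split reclaim can be conveyed with a toy model: keep metadata entries and data pages in separate LRU structures, each shrunk by its own reclaim pass, so a large data-cache reclaim can never stall a metadata-heavy request. This is a conceptual sketch only; the paper's method modifies the Linux kernel's reclaim path, and the class below (`SplitReclaimCache`) is invented for illustration.

```python
from collections import OrderedDict

class SplitReclaimCache:
    """Toy model of split reclaim: metadata and data caches are
    bounded and reclaimed independently, so reclaiming one never
    adds latency to requests that only touch the other."""
    def __init__(self, meta_limit, data_limit):
        self.meta = OrderedDict()   # dentry/inode-like entries
        self.data = OrderedDict()   # page-cache-like entries
        self.meta_limit = meta_limit
        self.data_limit = data_limit

    def put_meta(self, key, value):
        self.meta[key] = value
        self._reclaim(self.meta, self.meta_limit)   # never scans data

    def put_data(self, key, value):
        self.data[key] = value
        self._reclaim(self.data, self.data_limit)   # never scans metadata

    @staticmethod
    def _reclaim(cache, limit):
        while len(cache) > limit:
            cache.popitem(last=False)   # evict least recently inserted
```

In the unmodified design the two caches share one reclaim pass, so a burst of data-page eviction delays metadata lookups; splitting the passes removes that coupling.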
Archive | 2010
Keiichi Matsuzawa; Yasunori Kaneda
Archive | 2008
Keiichi Matsuzawa; Takahiro Nakano
Archive | 2010
Keiichi Matsuzawa
Archive | 2011
Atsushi Ueoka; Takaki Nakamura; Takayuki Fukatani; Keiichi Matsuzawa; Jun Nemoto; Atsushi Sutoh; Masaaki Iwasaki
Archive | 2006
Keiichi Matsuzawa; Takaki Nakamura; Koji Sonoda
Archive | 2010
Keiichi Matsuzawa
Archive | 2013
Keiichi Matsuzawa; Hitoshi Kamei